Many research disciplines, like ecology, ethology, or psychology, are considered 'soft' sciences, scientifically not truly 'exact', because of the perceived complexity of the phenomena investigated. But how exact is science in general, from an empirical, scientific, or philosophical point of view? That empirical observation always differs from theoretical prediction can be illustrated with simple examples. For instance, how many decimal places are required to match the theoretical prediction concerning the dimensions of simple geometric figures? Who decides on the precision of measurement required? In other words, how exact is "exact"? Moreover, simple geometric equations describe triangles, but how well do the theoretical predictions provided by the ancient Greeks match practice? If children or adults are asked to measure the same triangle drawn on paper, and to apply the mathematical equations describing the surface or perimeter of that triangle, there is definitely an observer effect. Deviations from the human-created theory might be caused by different factors. Measurement precision might change with the thickness of the lines constituting the triangle: thicker lines might increase imprecision in measurement. The environment at the time of measurement influences perception, therefore also determining how triangles are measured; more ambient noise might lower mental focus, perhaps having an impact on how triangles are measured in a classroom. Evidently, precision in measurement will also depend on the material used.
Thus, empirical measurement of the same triangle provides different results amongst observers even after controlling for support and environment.
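The observer effect described above can be sketched with a small simulation. All numbers here, including the noise model tying imprecision to line thickness, are hypothetical illustrations, not measured values:

```python
import random

# Hypothetical sketch: many observers measure the legs of the same right
# triangle (true legs 3.0 cm and 4.0 cm). Each measurement gets Gaussian
# noise whose spread grows with the assumed thickness of the drawn lines.
def measure(true_value, line_thickness_mm, rng):
    noise_sd = 0.02 + 0.05 * line_thickness_mm  # assumed noise model
    return true_value + rng.gauss(0.0, noise_sd)

rng = random.Random(42)
areas = []
for _ in range(1000):  # 1000 simulated observers
    a = measure(3.0, 0.5, rng)
    b = measure(4.0, 0.5, rng)
    areas.append(0.5 * a * b)  # area of a right triangle from its legs

mean_area = sum(areas) / len(areas)
spread = (sum((x - mean_area) ** 2 for x in areas) / len(areas)) ** 0.5
print(f"theoretical area: 6.0, mean measured: {mean_area:.3f}, sd: {spread:.3f}")
```

No single simulated observer reports exactly 6.0; the distribution of results, not any one value, is what empirical practice delivers.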
If the true nature of nature is indeed variation, the human-invented 'perfect' triangles defined in mathematics can never be identified in empirical research practice. Why should human-invented theory be right and human practice be wrong? Who decides that a human-invented theory that does not take natural principles of variation into account is right? If one takes the reality of natural diversity into account, the measurements, not the theory, are true. Alternatively, both practice and human-invented theory might be considered true, accepting that all human products and activities result from natural processes.
It is highly important to understand errors in any field of science, but in the soft sciences it is far more difficult. We are treated almost daily to misunderstandings of error. My thesis advisor, Martin Deutsch, summarized this with a fictional tale. He said he attended a joint conference of the American Physical Society and the American Medical Society shortly after WW II. He was especially interested in a report on the cancer-causing effects of the tars in cigarettes on chickens. The report stated that 33.3% of the chickens developed cancer, and 33.3% of the chickens did not develop cancer. He was intrigued, and asked what happened to the remaining 33.3%. The speaker replied, "The other chicken ran away."
In the hard sciences, it is repeatedly found that the errors in today's experiments cover up tomorrow's great discoveries. Q: Who decides to pursue a reduction of error? A: A scientist who is convinced that the next layer of truth lies within the error. I have pursued the question of the validity of the inverse square law for electrons (and muons) down to a distance of 10^-15 cm, because I was seeking a deviation. Too bad, none was found. However, the pursuit of measurements of "g-2" for electrons and muons, with reduced errors, by others, has uncovered a world of effects due to vacuum polarization.
It is important to know the magnitude of expected errors. Some can be directly measured by repeating, as best you can, a measurement. The distribution of results (frequently a Gaussian distribution) is a measure of the random error. Unfortunately, a harder error to quantify is the so-called systematic error. This type of error results if, say, you are measuring length with a short ruler. Sometimes a systematic error can destroy the assumed basis of an experiment. At the Grand Canyon gift shop I noticed a branding iron with the letter "W". The probability of finding the initial of my last name is 1/26. The iron was three-sided, and so I looked at the second side. I was shocked to find the initial "R". The probability of my first and second initials being on a random item is 1/676. I previously used the middle name Murray, and my hands trembled as I turned the iron to the third side. Lo and behold, there was the letter "M", a probability of 1/17,576. I was rather shaken when I walked up to the checkout counter and asked, "Do you have these branding irons with ALL initials?" "Oh sir," the lady answered, "those aren't initials. That is a steak branding iron, and the letters stand for Rare, Medium, and Well done."
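The naive probabilities in the branding-iron anecdote can be verified with a few lines of exact arithmetic; the joke, of course, is that the real 'systematic error' was the assumption that the letters were random at all:

```python
from fractions import Fraction

# Chance that 1, 2, or 3 uniformly random letters (A-Z) match a given
# set of initials, assuming independence -- the assumption the anecdote
# shows to be spectacularly wrong for a steak branding iron.
one_letter = Fraction(1, 26)
two_letters = one_letter ** 2    # 1/676
three_letters = one_letter ** 3  # 1/17576

print(one_letter, two_letters, three_letters)
```

Exact fractions (rather than floats) reproduce the 1/26, 1/676, and 1/17,576 figures from the story without rounding.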
Thus, 'exact' is not exact!
Scientists or philosophers might propose that hypotheses that cannot be empirically tested or verified should not be exposed or published. Philosophers or scientists might also argue that science should only propose terminology that can be empirically tested or verified. Can 'mathematical symmetry', or rotational and discrete symmetry transformations as defined in mathematics, be identified with an empirical approach when 'perfect' circles or 'perfect' squares as defined in mathematics cannot be drawn and measured because of biology-based constraints?
If 'mathematical symmetry' cannot be observed or identified in practice, should 'symmetry' simply be replaced by empirically measurable terminology, such as relative 'deviation', 'variation', 'difference', or 'asymmetry'? And what about other empirically immeasurable terminology, like 'fixed', 'static', 'invariable', etc.? Evidently, in the vast majority of cases, measuring 'absolute' values with an infinite number of digits after the decimal point (A = 1.000..., or B = 2.000..., or A = B) requires much more scientific effort and investment than the identification of biologically perceived relative differences (A ≠ B).
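The contrast between 'absolute' equality and a relative difference within a tolerance shows up even inside computer arithmetic. A minimal sketch (the tolerance chosen here is arbitrary):

```python
import math

# Two computations of "the same" quantity rarely agree to infinitely
# many decimal places, so exact equality tests usually fail:
a = 0.1 + 0.2
b = 0.3
print(a == b)  # False: even the computer's arithmetic is not "exact"

# Testing for a relative difference within a tolerance is the workable
# empirical alternative (here: agreement to within 1 part in 10^9):
print(math.isclose(a, b, rel_tol=1e-9))  # True
```

Asking "are A and B the same?" thus always carries a hidden decision about how much deviation still counts as 'equal'.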
Whether something is 'exact' depends on the specific issue you are watching, on your point of view. The main mistake in psychology is saying "this population matches exactly with this other one", because they forget to add: "on this specific variable". That road takes you to see that someone decides which exact variable defines the 'perfect' condition.
In maths, you find that the possibilities are not exact but exponential. Since when is 3.1416 exact? Let's talk about π (http://en.wikipedia.org/wiki/Pi): what is the "exact" number? It seems we need to cut it short at some point, and at school we learned that it was enough to say 3.1416...
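The point about 3.1416 can be made concrete by comparing truncated values of π against Python's double-precision math.pi, which is itself only an approximation good to about 16 digits:

```python
import math

# How "exact" is 3.1416? Compare common classroom approximations of pi
# with the best value double-precision floating point can hold.
for approx in (3.14, 3.1416, 3.141592653589793):
    error = abs(math.pi - approx)
    print(f"pi ~ {approx}: absolute error {error:.2e}")
```

Each extra digit shrinks the error, but no finite number of digits makes it zero; someone still has to decide which error is small enough.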
So any researcher, at any point, from knowledge, or computer capacity, or any other reason, decides the exact amount of things. It's easy to say "I have exactly 3 brothers", but what about when someone has a brother in another family? Is it mathematically correct that you have 2 brothers and a half? Can you have "half" of a person?
Empirical science is based on perception, so psychology or neurobiology should evidently have an impact on how 'exact' is perceived. Some will see more detail than others when watching the same phenomenon. Evolutionary biologists might state that the brain, or the perception mechanism defining the vision of 'detail', evolves. What is defined as exact today will differ from what is defined as exact in the future.
Marcel,
The business of many sciences is describing the 'truth' about the universe. Hence results should be independent of the observer (verifiable). They could be statistical if that is how the natural phenomenon operates (e.g. Brownian motion of pollen in water). We even know the theoretical limits of the precision of some observations (e.g. the uncertainty principle in physics).
Since ancient times, when weight standards were first used, there has been a whole industry designed to validate our measurements against standards (in the US we have a national institute for that, NIST).
These efforts (and much of the 'scientific method') are designed to circumvent the limitations associated with the fact that science is performed by humans as you describe in your question.
Hello Emmanuel,
Can we describe the 'truth'? The only thing we can do is reach what I call democratic science consensus (DSC) with the tools we have available. DSC is reached when so-called 'experts' are willing to tell the same or similar stories at at least one human-defined scale of analysis or perception.
Marcel,
I agree with your definition of 'truth' in science. Note, however, that in science there is a great incentive to prove that the consensus agreement is wrong. This incentive is critical if we are to move closer to the underlying objective truth (whatever that may be).
The fact that science is not only an accounting of what there is, but also about the relationships between things, means that 'truth' can result in testable predictions (e.g. the Higgs boson). These predictions provide opportunities to challenge the DSC.
Of course, there are always exceptional talents who do not always agree with what is supported by a whole science community... And people can indeed make useful predictions at some scales of analysis and perception. Otherwise it would have been impossible to send a machine (the Rosetta probe) to study a comet.
This is actually a very deep question. It is the general view of philosophers of science that experiments are theory-laden. That is, which experiments are actually performed, and how the experiments are interpreted, depend on current theory. Duhem's viewpoint (1906/1954) is that a single hypothesis by itself, whether induced by observation or postulated by a guess, is not really science. The essential difference between science and pseudoscience or non-science is that a scientific theory should provide coherent, consistent, and wide-ranging theoretical organization. Observations are only scientifically relevant to the extent that they give guidance on how theories should be formulated and how they should be refined. Thus, no single observation can ever serve as a crucial experiment to confirm or refute any one specific hypothesis conclusively, taken apart from the whole complex of theory and interpretation. Scientists are always free to add new auxiliary hypotheses to the existing theory rather than to accept any single counter-example as a challenge to its general validity.
Hello Calvin,
Theory is tested with empirical approaches, like experiments. But how often does it happen that observation during daily life provides inspiration in theory development? How long would it have taken to develop 'relativity theory' in Physics or 'parent-offspring conflict theory' in Evolutionary Biology without daily experience providing inspiration? Does theory result from empirical approaches, like observation, or vice versa?
Marcel, in English the word 'exact' is seldom used in this context; the word in use is 'hard': hard sciences such as physics, as opposed to soft sciences such as psychology (even though, to be fair to psychology, it is becoming more and more of a hard science).
In French however, it's still called 'les sciences exactes', but that's a total misnomer.
Dear Marcel:
Generally speaking most philosophers of Science would say that a theory cannot be tested.
In my paper "A Role for Experiment in Using the Law of Inertia to Explain the Nature of Science: A Comment on Lopes Coelho" (Science & Education 18, 25-31, 2009),
I discuss the point that Galileo discovered the law of inertia not by any reference to experiment, but rather by comparing two theories (geocentric and heliocentric) and asking what principle is needed to make the heliocentric theory plausible.
Scientists came to accept the heliocentric theory even though there was no evidence that could decide between the theories for another 200 years.
Holton shows that Einstein was unaware of the Michelson Morley experiment when he came up with the special theory of relativity. He based the theory upon a critical examination of Maxwell's theory.
Thanks Chris. My mother language is Dutch!
And with 'exact' I mean 'precision'. How 'precise' should science be, and who decides on the precision required (e.g. the number of decimal places)?
To further stimulate discussion:
Mathematics and statistics are human inventions aimed to 'artificially' simplify the presentation of nature's complexity (diversity, dynamics). Numbers are human-invented symbols aimed to count and quantify phenomena (objects, events, organisms) having human-defined characteristics in common. The need to count with numbers will thus depend on the details taken into account to define or describe phenomena sharing perceived characteristics. If each individual (or object) is unique in physical or biological structure, quantities of shared features exceeding '1' are philosophically not required. In other words, the simplest mathematics-based summary of nature's complexity is '1', and the number of types of '1' to be mathematically defined will match the number of phenomena investigated. Can mathematicians or statisticians with simplified visions of the world impose analyses on biologists who accept nature's complexity and diversity? Scientific observation is translated into forms that can be statistically analysed. Statistical analyses require that some variables are presented as 'continuous', others as 'classes'. When carbon ('C') is used to define a class, all living beings or objects containing carbon might mathematically be grouped together, defining a class with 'C' and a class without 'C'. However, the potential 'error' (i.e. lack of precision) of this classification is that each individual 'C' might be unique in physical expression when all empirically accessible or inaccessible scales of analysis are included. For instance, mathematicians might be unfamiliar with Heisenberg's uncertainty principle when particles have to be classified. The mathematical definition of a class should therefore be considered a relative and simplified concept that accepts 'imprecisions', depending simply on the baseline knowledge available to those who define a class of phenomena sharing common characteristics.
Cheers
Marcel,
Any mathematician who wasn't aware of Heisenberg would be really not very good... I'm not sure either about your affirmation that "Mathematics and Statistics are human inventions".
I'd rather opine that we humans are not very good at all at math, and that an extremely high percentage, or all, of external reality can be represented mathematically by wave functions and/or hierarchies of wave functions. But we're so bad at math that as soon as a system is a bit more complex than a hydrogen molecule, we just can't figure out its wave function.
Hi Marcel,
What a perfect place to add the absolutely necessary understanding of the methods, requirements, and role of stats in psychology (especially)! After years of graduate work, two degrees, one fellowship, and multiple opportunities to test observations (such as those about stats) in real life, I can definitively say from my experience that most psychologists or psychologists-to-be will never be taught stats properly. Stats won't be used properly, and the results lead to conclusions ('limitations' notwithstanding), then to publications and presentations that filter down to people who might actually have benefited had they been given accurate information.
I was taught Feyerabend, Meehl, and others, with Against Method as the main text for my History of Psychology grad course, so I am aware of the debate about null hypotheses, language use in stats, meeting criteria in order to run a particular statistical test, handling missing data, and so on. A great quote about stats goes something like this: IF you understand the 'rules' that govern the approximation of reality through statistical methods, AND you have an appropriate sample, appropriate data, a good range of data, and tight controls (assuming an experiment): rotating questionnaires; 're-calibrating' trained undergrad research assistants as they soon tire of memorizing their portion of what must be done, the same way, every time; and a very cool program that, after hard-copy data was entered into a data file, required you to re-enter it all again, beeping if differences were detected so that the entry person went back to the original and entered the accurate data, times 600 participants, times 20 questionnaires... IF you are on top of all this, stats produce nothing more than a snapshot, one way to see the data, constrained by multiple factors. A Polaroid. Useful for specific conditions, to answer a particular question perhaps, but not representative of photography. I ponder that sometimes: my field, a field basically formed by piles and piles of Polaroids. And how quickly that pile becomes a headline in the NY Times, or a peer-reviewed journal article, or a conference presentation.
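The double-entry verification procedure described above (key the data in twice, flag any mismatch for checking against the hard copy) can be sketched in a few lines. The function name and the sample data below are hypothetical illustrations:

```python
# Hypothetical sketch of double-entry data verification: questionnaire
# data are keyed in twice, and any mismatch between the two passes is
# flagged so the entry person can re-check the original hard copy.
def verify_double_entry(first_pass, second_pass):
    """Return (row, column) positions where the two entries disagree."""
    mismatches = []
    for r, (row1, row2) in enumerate(zip(first_pass, second_pass)):
        for c, (v1, v2) in enumerate(zip(row1, row2)):
            if v1 != v2:
                mismatches.append((r, c))
    return mismatches

entry1 = [[5, 3, 4], [2, 2, 1]]
entry2 = [[5, 3, 4], [2, 4, 1]]  # second keying has one typo
print(verify_double_entry(entry1, entry2))  # -> [(1, 1)]
```

The procedure catches keying errors only where the two passes disagree; a typo repeated identically in both passes slips through, which is itself a small lesson in systematic versus random error.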
IMO, the way we have defined science today makes it impossible to be 'exact' as most non-philosophers would use the term. For instance, the U.S. Supreme Court used the three terms "good grounds", "good science", and "reliable foundation", and basically they are treated as requirements met by properly following the steps of the scientific method: using creative, non-logical, logical, and technical methods, with the proper applicable procedural principles and theories, and with desirable personal attributes and thinking skills. Further, it has been stated that "knowledge is our biggest industry." So... what is knowledge? According to the U.S. Supreme Court and many, many others, the scientific method is a natural one and the method of knowledge. Not "attempting, with the best methods available, to discover 'truth' or approximations of it", but THE method of knowledge. Period. The phrasing rips away whatever veil is left covering the current state of affairs: of science, of terminology, of investigating, and of the most important 'fact': that knowledge comes from using the scientific method.
Sadly, Roger Bacon, the proper 'father' of the S.M., described some steps one could take to improve how science had been conducted before (once he got his hands on translated copies of Greek scientists' work), not a method. But he recognized it did resemble a method, a recipe, and wrote a LOT more about how NO method, NO list of steps, written by NO ONE, would ever get humans close to knowledge. And that was simply because our perceptual development, from birth and before, is malleable and easily altered, during years we don't remember and aren't capable of understanding; by the time we no longer need diapers, we are functioning in (well, us) a country that purposefully alters reality, and thus perception, every single day. And not only are we unaware for a long time of the nature of perception in humans; it takes even longer to admit it, since anyone who buys that would logically have to admit that he or she can't perceive accurately. Bacon was far from the only person, historically or currently, to have presented a version of this. It has never changed the way we teach, raise children, teach children, run marketing, anything about our culture. Bacon's point was: throw out the S.M. until you can USE it. Because of the terminology, the higher-order cognitive skills needed to properly execute the steps, and the implied happy ending (ha! that happy ending is SO close once a person is comfortable rewriting his or her hypotheses to match results, adding some new articles, and coming up with some superficial implications), we are up against barriers we haven't solved in at least 1000 years ('we' as in 'the West'). What classes do we take where we are actually, really taught how not to make assumptions? Saying "just be objective" IS NOT ENOUGH. In fact, grad school is considered one long conditioning/socializing course: the opposite of stripping away the conditioning that allowed us to develop (perceptually) without noting the anomalies, without making a real effort to change.
Bacon was trained in methods that, for the right person at the right time with the right teacher, do in fact tackle our biggest problem--being human.
For contemplation: in 1993 the Supreme Court decision involving Merrell Dow Pharmaceuticals, Inc. reviewed the definitions of scientific evidence, scientific knowledge, scientific validity, and good science. As part of this case, the American Medical Association et al. filed an amicus brief in support of the respondent and stated:
“‘Scientific knowledge’ within the meaning of Rule 702 is knowledge derived from the application of the scientific method.”
Good night
Thank you:) I enjoy working hard on prepping courses, having conversations and discussing topics like these... but I haven't come anywhere close to being 'brief'... so I appreciate the latitude and feedback. Take care.
The space probe lands on Mars because geometry and physics are true and the formulas are correct. It is not a question of the lines on paper being thicker or thinner, because it is not about the perception of depicted forms.
If you insert different values in the correct formulas, the space probe lands somewhere else. But the formulas don’t become incorrect.
Anyway the formulas too are human-invented.
Hi,
I think the space probe landing on Mars does not imply the formulas are truly exact or very precise. They are just exact or precise enough that the space probe can land on Mars.
I think the question of precision is located on the "value side"; i.e. the formulas themselves are not precise, but correct or incorrect. They contain a "blank space" for the values.
Some researchers are good at inventing correct formulas; others are good at getting more precise numerical values. Both activities are "human practice".
There can be a mistake on the "formula side" too: when a formula is chosen for a purpose other than the one for which it should be applied.
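"Precise enough to land on Mars" can be given a rough number. A sketch under assumed figures (the ~2.25e8 km radius below is only an illustrative order of magnitude for an Earth-Mars distance, not a mission parameter), showing how truncating π propagates into kilometre-scale errors over such distances:

```python
import math

# Suppose we compute the circumference of a path with radius comparable
# to an Earth-Mars distance (~2.25e8 km, a rough illustrative figure)
# using truncated values of pi, versus double-precision math.pi.
radius_km = 2.25e8
exact = 2 * math.pi * radius_km
for digits, approx in [(5, 3.1416), (8, 3.1415927), (10, 3.141592654)]:
    error_km = abs(exact - 2 * approx * radius_km)
    print(f"{digits} significant digits of pi -> error: {error_km:.3f} km")
```

A value of π that is perfectly adequate for a schoolroom triangle misses by thousands of kilometres at interplanetary scale: "precise enough" is always relative to the task.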
I missed this last part. If I understand Martin's point, perception is not the issue because (and I believe this is implied in your post) reality is not appearance. IF that is even close to the point you were attempting to make, it is another example of how little most of us know about perception. That is, you, or anyone, isn't just going to know that something (a formula) may still work regardless of its appearance without some training in overcoming tendencies to make assumptions (for one). So that observation 'doesn't count', as the majority of people, including a much too high number of educated people, don't in fact know that appearance may or may not indicate function/reality (we may mean something different by the word 'know' too); and that discussion alone has hundreds if not thousands of years invested in it thus far. We in the West may in fact be the most prone to making appearance/function mistakes. It's interesting to see the interactions between parents and children in public places that instruct the child not to follow up on his intuition, or thoughts, but to accept what we can plainly see. About as interesting as it is when scientists do it.
Hi Kelly,
Appearance is basic for everything. “There seems to be a tree” is basic for “There is a tree.”
I did not say “reality is not appearance” and it was not implied in anything I said.
We should not try and divide scientists into two groups, those with intuition and those who stick to numbers. Researchers need a lot of intuition.
My point was about intermediate instances.
In his original question Marcel Lambrechts spoke about the observation of triangles with thicker and thinner lines. I replied that in applied geometry it is not about the question how thick the lines are.
If you avoid a road accident by intuition there is a good portion of experience with geometry in that. Maybe there was a triangle between the two cars and the corner. And there are lots of other things implied here including appearance, observation, intuition, experience of powers, your fast reaction, the wet street and anything else we want to consider. Nobody wants to exclude anything.
How about this question then: "The triangle between the two cars and the corner: did it have thick or thin lines?"
Hello Martin,
I am glad you gave the question/discussion another try, as I meant no disrespect, nor intended to divide or offend anyone. I am pretty sure that, unfortunately, there is a language barrier, due both to how English is used by each of us and to differences in training, concepts, terminology, etc. I appreciate your patience, and the chance to clarify a few things. I do know you did not say 'reality is not appearance', and part of my answer was dedicated to pointing out that we, as a group of human beings, have struggled with the appearance-versus-reality, or appearance-versus-function, question, in all fields, for thousands of years. It is a problem for us, because appearance is not reality; however, it can be related to it, indicative of it, or coincidental with it, so I can accept appearance being part of our reality, because it is. The question for me is, in any given instance: is appearance reliable? Accurate? Etc., given that it is part of our world. No divisions intended, unless it appears divisive to point out that one can be trained, and can work on one's perceptual development, to the point that one understands both appearance and reality (and the relation between them) and has moved on to seeing things differently.
I am not sure what you mean by "appearance is basic for everything", which is a problem on my part. I thought about it, and not being tied to any particular philosophical branch or approach to this issue, I thought perhaps your statement referred to "appearance is where one starts", "appearance is the key, the basic feature which needs attention", or maybe "it's clear at least some of the time that what you see is what something IS". I fully concede I could be wrong.
Yet I'm not attempting to lecture you; rather, to discuss a topic that is huge in developmental psychology, with a long philosophical and mystical tradition, and is more important today than ever, as people in general do not know that because something appears to be a tree, that doesn't make it so. Piaget and Flavell spent a lot of effort documenting this, Baudrillard's body of work has dealt with it in one way or another for decades, and Rumi, one of the best-known mystics (13th century), dealt with the topic frequently. I always thought that was one of the biggest mistakes committed by the behaviorists, having been taught by one who was maybe 10-15 years behind Skinner in his professional development/career, and who stated that behaviorism was clearly the most scientific field because it focuses on what you can see, which is much more effective and ethical than guessing. Yet, even in its short 100-year existence, modern psychology has produced nearly unlimited evidence supporting the view that we don't see accurately, so visual observation cannot be considered superior to, say, recording one's thoughts. Further, studies show that people aren't just performing one behavior at a given time, so the idea of a target behavior is not useful; that if a mentor is standing near a student who is conducting a behavioral assessment/baseline, the student's paperwork suddenly reflects more of the behavior the mentor is researching, often without conscious awareness; and, if we go back historically, the point that we 'see what we want to see' has been woven throughout the study of human development. My reason for posting originally was to point these things out and highlight the fact that, as a human race, we are still having trouble with appearance versus reality.
I'm not sure if math, or mathematicians, should be excused from the people with this trouble... Korzybski did think that math was the one language that greatly decreased illogical thinking AND the development of mental illness, but that's been shown to be otherwise :) Take care, Kelly
Could you focus your comments on the following sentence:
If the true nature of nature is indeed variation, human-invented ‘perfect’ triangles as defined in Mathematics can never be identified with empirical science practice. Why should human-invented theory be right and human practice wrong?
Dear Marcel,
Your question was about truth. This is a very high standard. Most researchers are satisfied with predictability.
In order to make predictions we attribute patterns to nature. These patterns were designed to reduce the variability. After a while we find better patterns and so on. But we cannot rely on these patterns as all of us know.
One might say: for some purpose (not for truth) it is enough to take into account only a small part of nature. This is problematic because everything we use – meters or theories – presupposes other theories that were established before. Optics and electrical engineering were used to build a microscope, and so on. No theory and no practice stands alone. But we cannot question all theories at once, because we must presuppose some theory in order to question a theory. Can't we get rid of this complexity and variability?
In our daily life we do not take into account all of the variation and complexity that surrounds us. Otherwise we would lose our capability to find our way home or to recognize our beloved ones. Is this a good strategy for science too?
In science the problems begin when you want to talk about variation. Variation of one thing? Then you will have to abstract from many differences in order to identify the thing as the very thing you referred to before. But if you say that they are two things because of the many differences, then there is no variation. So far we were talking not about truth but about predictability.
Truth: should we assume the highest amount of complexity and variation in order to catch the truth? The "whole truth"? Unfortunately we would then lose our ability to use our normal language. This is so because of the problems with reference and meaning. How can we make sure that our words refer to the same things? We could not guarantee this, and this would have bad consequences for our pursuit of truth. This is a problem not only for our communication but also for our individual thoughts. We could not be sure anymore that we ourselves mean the same things after some time.
We have to distinguish two different problems here: identification and language use. It is not so that the child only has to learn to identify an object as the same object as before. The child also has to learn that there are words for an object that can reappear after a period in which it was absent.
Truth in Science: If we must be aware of more and more details (variation) in our search for truth, we will never succeed due to problems of reference and meaning of words.
Martin, I really appreciated reading the sheer amount of information you touched on, and at times went into depth on, in your answer. I had actually attempted an answer to Marcel, and it disappeared, something I am sure people are familiar with regardless of the forum. When that happens, I take the time to re-write the answer, and leaving aside any 'reasons' I may assign to this process, I see you covered so much ground. I will try to put together the answer I had written from memory, and if there is any overlap that I perceive between our answers, I'll try to address it.
First, I must agree about terminology, whether one is using a child's understanding of language (and how it changes over time) or a semantic view ('chair' is not the thing, just the name) or even transactional analysis (the context of the words, whether they represent a parent, child, etc.). All matter. Thus, I will try to be as clear as possible.
I started with the last question: "Why should..." It shouldn't. It may, or may not, or may under specific circumstances; but in this case, even if 'should' is being used colloquially, it changes the fundamental discussion as I see it (fully aware I could be wrong about what the fundamental discussion is). Is this another way of framing the issue that is consuming the CogSci/AI field (or partially consuming it), which goes something like: "We are advancing in our abilities to create robots, programs, etc. in an interdisciplinary format, yet finding that once we have created a movement or 'decision-making' ability, we see how differently real people think and move. So do we create in line with reality, that is, use real human traits, or do we attempt something (in some cases) 'more' or 'better' by sticking with the most remote concepts derived from math, engineering, etc.?" That is the way I understood your question, and if I am off, then the remainder of my comments will be too, so you have my apologies in advance.
Perceptual development is a long process, involving not only the fitness of the sensory organs, but the primitive and higher-order pathways that sensation can travel once "inside", multiple sensory pathways that we are just beginning to understand (i.e., proprioception, interoception, exteroception), epigenetics and its role in our development, and so on. Epigenetics renders what I believe has been one of the main goals of developmental psychology moot: predictability. I am speaking very specifically about human development and our desire to predict, control, prevent and optimize. Once one has even a rudimentary introduction to developmental epigenesis (epigenetics for dev psych), it is clear that development is not predictable; thus devising programs to enhance, optimize, prevent and in other ways improve development (and the final product) will be much more difficult. I am not saying at all that we should give up, but I honestly believe dev psych will need to be familiar with genetics, biology, and the body as well as the 'mind' (the psych aspects of human life) in order to be effective.
Back to my initial statement, and perhaps this is where problems lie: I do think there is a basic perceptual development process which, if followed from birth (when parents technically can begin to alter perception), yields a final product that is still not optimal perception, but much, much closer than we reach on a daily basis. Then the person would need to go through a 'getting to know oneself' process, because the OUTCOME varies: each of us has different perceptual abilities and would need a tailored process to eliminate the perceptual problems that simply are part of being human, assuming the individual desires it. As Roger Bacon found, there are mystics who have refined this process and who, under the right conditions (right person, right teacher, right purpose), will work with you. So what appears to be variability in perception is, I think, largely due to (at least in the West) parents altering perceptual processes: altering the child's ability to assess reality, eliminating trial and error, desiring perfect children who then 'mess up' at 40 when they have a mortgage and family and most of their development is 'set', versus allowing them to do what looks like messing up as they grow, with a safety net (parents), while their perception is more malleable. I worked with enough families to understand that the problems often arise out of ignorance, and this is a huge barrier, but that doesn't change my mind that a more regulated, systematic, life-span approach to perception is available and necessary, and that what appears to be human variation in perception is 99% due to experiences based on ignorance. Any small percentage of variation left over is, to me, akin to doing one's very best with internal controls/experimentation and having random error left over. I don't think you were referring to random error.
If we did begin to regulate perceptual development the way we regulate eating, sleeping, and other activities in childhood, with the proper training and understanding, we wouldn't be discussing much of what we are. We would know it. We would be using capacities we have fully, and be discussing whatever at a more complex, accurate, closer to the truth level. Hard to imagine that kind of change, given our society here includes a majority of adults who don't reach abstract thinking (aka Piaget). It matters. It matters when you teach, drive, shop, make friends, witness/engage in conflicts, etc.
I hope this makes sense, and you have a good day. Kelly
Here is a proposal: The "exact sciences" are identified by the ability to construct uncertainty budgets. The existence of a valid uncertainty budget requires that the random variation of the measurand can be evaluated, and that the systematic errors involved in the measurement are well-controlled. This is what is exact about the exact sciences. The difference between idealised and real triangles (how thick are the lines?), interesting as it is in other contexts, is not the issue.
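To make the idea of an uncertainty budget concrete, here is a minimal sketch in Python, assuming the common root-sum-of-squares rule for combining a statistical (Type A) component with systematic (Type B) components; the triangle-measurement numbers and the function name are purely illustrative.

```python
import math

def combined_uncertainty(type_a_std, type_b_components):
    """Combine a Type A (statistical) standard uncertainty with
    Type B (systematic) components by root-sum-of-squares."""
    return math.sqrt(type_a_std ** 2 + sum(u ** 2 for u in type_b_components))

# Hypothetical budget for measuring one side of a drawn triangle (in mm):
# 0.2 scatter between repeated readings, 0.5 ruler resolution,
# 0.3 ambiguity from the thickness of the drawn line.
u_c = combined_uncertainty(0.2, [0.5, 0.3])
print(f"combined standard uncertainty: {u_c:.2f} mm")
```

On this view, a measurement is "exact" not when it matches theory to infinitely many decimals, but when every such component can be listed and evaluated; the drawn line's thickness enters the budget as one more component rather than as a refutation of geometry.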
Marcel, when you ask "Why should human-invented theory be right and human practice wrong?" it seems to me that you are confusing the "perfection" of the triangle with "rightness". The triangle is idealised, not "perfect" in some way. It is saying things about what is logically necessary given certain axioms: it is not actually saying anything at all about the natural world.
It seems to me that the "true nature of nature" you speak of in the Question is, logically speaking, ineffable. What is, is. It cannot be "defined" in principle. We can say certain true things about what is, but we cannot say every true thing about it (not in finite time, certainly; and probably the number of truths that exist is uncountably infinite). Your suggestion that "the true nature of nature is variation" must therefore be at least misleading if not altogether wrong!
The example I like of "exact" science is the measurement by Bessel in 1838 of stellar parallax. Astonishingly beautiful, and entirely expected, being conceived more than two centuries earlier by Galileo.
Hello Chris,
I agree with your statement: What is, is. It cannot be "defined" in principle. We can say certain true things about what is, but we cannot say every true thing about it (not in finite time, certainly; and probably the number of truths that exist is uncountably infinite).
In that case you could indeed ask the question: What is 'variation'? We cannot define it....
We can say certain true things about what people call 'variation', but we cannot say every true thing about it... The conclusion that 'the true nature of nature is variation' is therefore misleading. Is there any scientific evidence that something is what people call 'fixed'? Everything studied on Earth moves in space, so we cannot say every true thing about 'fixed'. Perhaps there are different definitions of what scientists call 'fixed'? 'Fixed' in statistical analyses versus 'fixed' in space (physics) versus 'fixed' in chemistry...?
Marcel, sorry, but I don't understand your response. (I am sorry I don't speak French!) What has "variation" to do with "exact science"? (I suspect a translation problem here?)
Dear Chris,
Perhaps you could have a look at the 'Lexicon of arguments' from Martin Schulz. The history of terminology is outlined at that site (accessible through internet).
Concerning 'exact' science: can variation (e.g. change of a phenomenon, difference between two phenomena) be measured in an exact way? The answer is 'never', I think.
@Marcel
The definition of exact is good enough. Good enough is defined by the question asked.
Too much of science is committed to null hypothesis testing and a simple difference of probability without accountability. A scientific question revolves around logical premises. One defines the question in simple terms (premises) that lead to a particular conclusion. At least one of the premises establishes what is needed to reach the conclusion, i.e., good enough to show the logical solution.
So much of science has reduced logic to statistics rather than probability. A logical statement does not involve probability, but the limit of measurement often requires that a probabilistic solution be applied to the logical structure. The null hypothesis approach reduces the investigation to statistical methods that avoid defining good enough.
Null hypothesis testing has devastated science by giving an easy way out. Statistics defines good enough, not the logical question.
Your example of the triangle comes under good enough and serves to illustrate what is involved. An ideal triangle is easily defined and constructed in Euclidean geometry. The measurement of any triangle has a degree of difficulty depending upon the definition of good enough. The ideal triangle has no thickness of line, so errors of measurement are not considered in the ideal.
A measurement of a triangle for a specific purpose requires criteria for the measurement. The criteria should define good enough. Anything more than good enough is wasted effort.
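The point that "the criteria should define good enough" can be illustrated with a toy check in Python, assuming an explicitly chosen tolerance for a triangle measurement (the tolerance value and numbers are illustrative, not prescribed):

```python
TOLERANCE_MM = 1.0  # criterion chosen for the purpose at hand (illustrative)

def good_enough(measured_mm, predicted_mm, tolerance_mm=TOLERANCE_MM):
    """A measurement meets the criterion if it agrees with the
    theoretical prediction within the stated tolerance."""
    return abs(measured_mm - predicted_mm) <= tolerance_mm

# Perimeter of a 30-40-50 mm right triangle: theory predicts 120 mm.
print(good_enough(119.4, 120.0))  # off by 0.6 mm -> True
print(good_enough(117.8, 120.0))  # off by 2.2 mm -> False
```

The criterion, not a p-value, decides the outcome: change the purpose, change the tolerance, and the same measurement may pass or fail.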
Isn’t it in part a question of what we call a repetition and what we call a new event? It is good to treat everything as new in order to acknowledge the particularities of a situation, especially when persons – patients – are involved.
We can also say “… a new hill, new water…” if we like. But I think it is not so useful to ask whether the laws are the same.
You know the old discussion about universals. There was a philosopher – W.V.O. Quine – who first denied that there is an entity “appendicitis”, because the cases have nothing in common (no virus or anything else). Later he came to recognize it as an entity.
For me, in history no event ever was repeated. But there are repeating forms and strategies. When a strategy fails after being successful in earlier cases, then it fails because the world does not always do us the favor to remain constant.
@Joseph Exact = good enough.
We do not need to know every tiny detail of the world to survive, reproduce and be happy. We also see that, thanks to scientific/medical progress, the living conditions of many people have improved significantly.
Marcel, perhaps (not to be exact), exactness is influenced by past exactnesses and the implications of not being exact. Future scientific efforts always attempt to surpass past records. Attempts are also made to reduce errors on the basis of the "destruction" these errors cause.
How exact is 'exact' in science practice? This is ambiguous. When scientific practice is research, the exact is exact enough to be testable (Popper, Logic of Scientific Discovery, section 36). In applied science, the degree of exactitude depends on aims. Thus the degree of water desalination depends on its aim, whether it be irrigation or drinking.
To get a grasp on the inexactness of the sciences, it is already a norm to think of "doing science" as conjecture-making and the whole process of defense against falsification (I am thinking along the lines of Karl Popper). In physics, the continual quest for a unifying framework and the task of integrating quantum mechanics with relativity have turned up many quagmires, such as the statistical nature of the quantum level and the related Heisenberg uncertainty principle. In mathematics, Gödel's theorem dealt a blow to the axiomatic program. The quest for universality is tempered by the Duhem-Quine thesis, while the formalism of axiomatics has opened up seemingly endless possibilities. So on both the empirical and axiomatic fronts, there are indications of the "controvertibility of truth". Thus the role of scientists becomes one of specifying the "range of tolerance" for truth in their field, as opposed to establishing doctrines written in stone. Some may go as far as arguing that the accepted "range of tolerance" is a function of the dominant paradigm (i.e. Kuhnian paradigm shifts).
Yes indeed. It must be 'good enough', and defining what is 'good enough' can only be based on human consensus defining 'tolerance', and why not 'science-based empathy'?
Best regards,
Marcel
An example of science-based tolerance/empathy: The team invested so much time and energy in that study that, although not perfect, we will accept it for publication, also because it will stimulate more thinking/research in the field…
Cheers
Hotelling’s starting point is the observation that if one of the sellers of a good increases his price ever so slightly, he will not immediately lose all his business to competitors – against the predictions of earlier models by Cournot, Amoroso and Edgeworth: Many customers will still prefer to trade with him because they live nearer to his store than to the others, or because they have less freight to pay from his warehouse to their own, or because his mode of doing business is more to their liking, or because he sells other articles which they desire, or because he is a relative or fellow Elk or Baptist, or on account of some difference in service or quality, or for a combination of reasons. (Hotelling 1929, p. 44)
The problem with the predictive power of economic models is that the initial assumptions on which a model is based may not be right, for instance when those who construct models use self-experience as a guideline to define the initial assumptions. But from a practical point of view, people producing economic models do not have quantitative/empirical access to the motivations, and the underlying causes of the motivations, of the individual customers determining trends in consumption at the population level? Or to put it simply, modellers do not truly know the populations they wish to study well enough to make predictions reliable from an empirical point of view?
Can human behaviour be predicted?
http://health.answers.com/Q/Can_human_behavior_be_predicted
Do economic models always accurately predict economic behavior?
No, economic models don't always predict economic behavior because models are based on assumptions, or things that we take for granted as true.
An exact statement in Economics/Biology:
- When the resource (e.g. plastic) is not there, it cannot be used
- When the resource is not available, it cannot be used
Nancy Cartwright has done some classic important work on the relation between truth and explanatory value, and on keeping them apart (https://www.jstor.org/stable/20013859). She also argued that the descriptive and the explanatory aspects of laws conflict. “Rendered as descriptions of fact, they are false; amended to be true, they lose their fundamental explanatory force” (Cartwright, N., 1980, “Do the Laws of Physics state the Facts,” Pacific Philosophical Quarterly, 61: 75–84)
A 'description' is not more than an individual interpretation of an observer/perceiver, and therefore the 'fact' is represented by the individual interpretation of the description?
An 'explanation' is not more than an individual interpretation of the 'truth' of an observer/perceiver?
Marcel,
to be frank, your questions - assuming you directed them at me - went a bit over my head. I'll try responding to them, but please feel free to correct me if I misunderstood:
A 'description' is not more than an individual interpretation of an observer/perceiver, and therefore the 'fact' is represented by the individual interpretation of the description?
I do not think that a description is nothing but an interpretation. That is, not all things which can be described require a process of interpretation.
Your phrase "an individual interpretation of an observer/perceiver" is ambiguous: Do you mean that descriptions are interpretations of agential behaviour? That cannot be it, as it seems trivially false (for there are many things in nature which are not agential, which are neither observers nor perceivers), so the second option is that you mean that all descriptions are interpretations by agents. Again, I would say no: Descriptions may always be made by agents, but they need not be interpretations (except, perhaps, if you are a pantheist and believe that nature always requires interpretation, say, as the expression of God's will).
An 'explanation' is not more than an individual interpretation of the 'truth' of an observer/perceiver?
I have similar problems with this phrase as the ones I mentioned above. Explanations can be correct or false, adequate or inadequate; but I cannot see what makes them "individual interpretations". Again, depends on what you believe - maybe you think that there is no truth per se, or no facts, or something like that. But none of this would be self-evident to assume.
Each eye-brain will perceive phenomenon X in a slightly different way, and therefore will 'describe' phenomenon X in a slightly different way, especially when the phenomenon becomes structurally more complex. When, then, can a description be considered 'true'?
In this framework, I would define the transformation from 'perception' to 'description' as 'interpretation'.
Marcel,
thanks for your elaboration. You seem to express the faith that facts can somehow be reduced to facts about individual experience (which resembles Carnap's project in his Der logische Aufbau der Welt). In Word and Object, W.V.O. Quine expressed a classic - and, I believe, decisive! - objection to this view:
"The usual premium on objectivity is well illustrated by ‘square’. Each of a party of observers glances at a tile from his own vantage point and calls it square; and each of them has, as his retinal projection of the tile, a scalene quadrilateral which is geometrically dissimilar to everyone else’s. The learner of ‘square’ has to take his chances with the rest of society, and he ends up using the word to suit. Association of ‘square’ with just the situations in which the retinal projection is square would be simpler to learn, but the more objective usage is, by its very intersubjectivity, what we tend to be exposed to and encouraged in".
In short: To describe something as a 'square' does not require having a square-shaped retinal perception, and this fact pretty much extends to everything we perceive and describe, and for which perceptions are truth-makers or prompters of descriptions. Facts are not reducible - and, in practical usage, are de facto not reduced - to facts about individual experience. Hence, neither are descriptions.
I think that if we quantify descriptions for a large population of eye-brains by suitable indices, these indices will follow statistical laws. For example, the frequency distributions of these indices might follow a normal distribution. The means of these normal distributions can be considered a best estimate of the 'true indices'.
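That idea can be sketched with a small simulation, assuming (as a model, not a fact) that each observer's measurement is a hypothetical 'true' value plus an individual perceptual error; all numbers are illustrative.

```python
import random
import statistics

random.seed(42)

TRUE_LENGTH = 10.0  # hypothetical 'true' side length of the triangle, in cm

# Each eye-brain reports the true value plus its own perceptual error.
measurements = [random.gauss(TRUE_LENGTH, 0.15) for _ in range(10_000)]

mean = statistics.mean(measurements)
spread = statistics.stdev(measurements)
print(f"mean = {mean:.3f} cm, stdev = {spread:.3f} cm")
```

No single measurement equals the 'true' value exactly, yet the mean of the population of measurements converges towards it as observers are added; this is the sense in which the mean serves as a best estimate of the 'true index'.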
Once upon a time I wrote:
Many research disciplines like Ecology, Ethology or Psychology are considered ‘soft’ sciences scientifically not truly ‘exact’ or ‘precise’ because of perceived complexity of phenomena investigated. But how exact or precise is science in general from an empirical or scientific or philosophical point of view? That empirical observation always differs from theoretical prediction can be illustrated with simple examples. How many figures after the comma are required to match theoretical prediction concerning dimensions of simple geometric figures? Who decides on the precision of measurements required? In other words, how exact is ‘exact’ or how ‘precise’ is ‘precise’? Moreover, simple geometric equations describe triangles but how well do the theoretical predictions provided by the ancient Greeks match practice? If children or adults are asked to measure the same triangle drawn on paper, and they apply mathematical equations describing surfaces or perimeters of that triangle, there is definitely an observer effect. Deviations from human created theory may be caused by different factors. Measurement precision may change with the thickness of the lines constituting the triangle. Thicker lines may increase imprecision in measurement. The environment at the time of measurement influences perception therefore also determining how triangles are measured. More ambient noise may lower individual mental focus perhaps having impact on how triangles are perceived and measured in a class room. Evidently, precision of measurement will depend on material used. Thus, empirical measurement of the same triangle provides different results amongst observers even after controlling for support and environment. If the true nature of nature is indeed variation, human invented ‘perfect’ triangles as defined in Mathematics can never be identified with empirical science practice. Why should human invented theory be right and human practice wrong? 
Who decides that human invented theory not taking natural principles of variation into account is right? If one takes the reality of natural diversity into account measurements, not theory, are true. Alternatively, both human practice and human invented theory might be considered true accepting all human products and activities are products from nature.
Marcel,
I agree on the problem in principle; what I meant to object to above was not the dichotomy between theory and empirical measurements, but the emphasis on individual measurements and individual descriptions. Theories about objective facts are not meant to account for differences in individual descriptions, except in cases when these objective facts are themselves part of psychological processes (such as facts about brains or retinal projections). That is, what theory has to account for are objective descriptions of facts per se, not subjective, individual descriptions.
Definition of 'description': (written or oral) discourse intended to give a mental image of something (dictionary definition).
The Oxford Dictionary offers "A spoken or written account of a person, object, or event" as a definition of description, which I think is more accurate.
If you agree with the "mental image" wording, which I find problematic (or at least ambiguous), would you then say that empirical theories mean to account for mental images? That seems rather hard to swallow. I do not think that, say, quantum theory is meant to account for our perception of reality - it is meant to account for reality itself. It is just that our access to reality, and hence our way of validating theories, is by perception.
Quantum theory is the outcome of brain processes, or not? These brain processes are somehow associated with the perception of something? I don't see very well how a theory can be validated without some kind of perception of something.
It cannot; but the theory does not account for the perception, but for that which is perceived.
You develop a quantum theory and you stop developing it when you get this perception of a feeling of satisfaction, e.g. Eureka! Who will tell you that this feeling of satisfaction truly reflects what you call 'reality', besides the experience of the feeling of satisfaction itself?
https://en.wikipedia.org/wiki/Eureka_effect
From an evolutionary point of view, why should natural selection have selected brains that are able to develop complex theories that are not directly associated with survival and reproduction?
Your knowledge. I suspect that scientists' "Eureka" moment is hugely contingent on the rest of what they know.
Typically, scientists will also have some sort of pertinent psychological self-knowledge, such as "whenever I have a Eureka moment, I am likely to have stumbled upon a valid theory". In any case, if they aim to publish a paper about the theory which gave them that Eureka moment, reviewers are unlikely to accept "I had a Eureka moment" as justification of that theory.
Dear Joachim,
What is the underlying mechanism of an account? It is what people experienced, so the account should be individual-specific?
‘Exact’ may be exact in the applied sciences, but has NOTHING to do with ‘exactness’ in the human sciences!
By "account of X" I mean a set of theoretical descriptions which explain X. I am using it in a scientific context, meaning that it does not refer to subjective experiences, but to objective phenomena.
Additionally, X may of course be perceived, and hence the subject of individual perceptions, subjective beliefs, etc.. However, this is not necessary, as many things which are objects of scientific theory are, strictly speaking, not perceived (such as quantum phenomena).
If we cannot perceive it, how do we know it is 'true'?
From an empirical point of view, nobody observed 'energy' or 'force', only the effects of invisible phenomena we named 'energy' or 'force'. Physics is thus based on the assumption of the existence of invisible phenomena of which only the effects can be perceived, and thus studied?
In my opinion, a scientific worldview which includes theories about unperceived objects is justified if it explains the world better than one which does not include such theories. Empirical justification of such theories is indirect, exactly as you wrote: We perceive the effects of such objects.
"Mental states" are a further classical example of explanatory objects which no one has yet perceived, but which explain a lot about human and animal behaviour. In this case, behaviours are things which are perceived. We assume that they are caused by unperceived mental states, and this assumption has explanatory value.
I believe that accuracy is a goal toward which we can move with our scientific activities; we will always be approaching it, or a new paradigm, but we will never attain it completely. I also think that science proper stays closer to it, unlike the pseudoscientific, for example psychology, where each milestone can bring us closer to it or move us away from it without our realizing it.
Dear Ruben,
Science practice, as a behavio(u)ral expression, can be no more than one aspect of 'Psychology'?
Marcel M. Lambrechts, I am a researcher in both hard sciences, such as several branches of engineering, and soft sciences such as economics, sociology and administration. Based on my empirical knowledge, I consider that scientific behavior has a motivation beyond the psychological. It is rather the search for truth, where the metaphysical is present: the spirit of a superior being who has created all things and creatures. Although in these times we no longer want to refer to it, because it makes us smaller and more ignorant: humble.
How can science be 'exact' when scientists close their eyes to 'potential reality', e.g. because of social pressures?
Example:
https://www.youtube.com/watch?v=iROkeC3lmVA
I think no social pressure can withstand "potential reality" backed by "sufficient" scientific evidence.
Psychology: 'Evidence' for some is 'not evidence' for others?
https://www.youtube.com/watch?v=WFhDG8GpD44
Science has a base and superstructure. The basis is practical work, experiments, etc. The basis includes the most established philosophies.
1. Production activity, the use of engineered mechanisms, experiments. This whole group is preserved with any theoretical variations.
2. Philosophical generalizations:
1) The world is knowable. Otherwise, science does not exist, but there is a set of opinions.
2) Phenomena are interrelated and condition each other.
3) Matter cannot be born "out of nothing" or disappear "without a reason."
The criteria of the scientific nature of truth are formed by philosophy. It is the most profound generalization of the practice of people. Without criteria, scientists could not reliably separate errors from truth.
Unfortunately, the modern philosophy of positivism is afraid to make the criteria. It deals only with the methodology of science.
Useful definition:
Science (from Latin scientia, meaning "knowledge")[1] is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.[2][a] (Wiki)
I did not write about the scientific definition of the term "science". I wrote that without criteria there is no science, but there is subjectivism.
There exists a work called MATERIALIST THEORY OF SCIENTIFIC TRUTH KNOWLEDGE (in Russian).
If the foundations of science are based on 'self-experience', why do we need a 'theory', given that 'self-experience' reflects the local conditions of experience better than a 'theory', which reflects a human-made class/average of 'self-experiences' that does not reflect 'true reality', only a pattern?
One's own experience has a subjective position and therefore cannot be reliable. Philosophical criteria are needed because materialistic philosophy is a generalization of the whole historical experience of people.
How can you develop philosophy in the absence of 'self-experience already reflected in simple observation'?
What is exactly materialistic philosophy?
Talib,
I am impressed by your simple statement. Perhaps there is an exact definition somewhere, but that cannot be reflected in human words. Note that human words already represent a human-made simplification of the expression of 'anything', right?
I believe that absolute truth is unattainable. Knowledge and search is the way to this truth. But criteria are needed to separate truth from error. These criteria are in a large generalization of all human activity, i.e. in materialistic philosophy. It helps to formulate an increasingly comprehensive system of criteria, as a generalization of human experience.
There are trillions of chemical reactions in a single human body that you will never be able to quantify because of constraints, so you are working/experiencing with a biological tool/material you don't really know, to find out how the world is functioning?
Just consider the 'human experience' as a 'camera-like experience': you as the human observer are the camera capturing images of what happens at the surface of physical entities?
What is the true physical expression of a being X that changes in apparent size with the distance between the observer and being X, and that is perceived in different ways depending on which organism is perceiving being X?
http://siriusdisclosure.com/
There is all this discussion about the existence of living beings called extraterrestrials with mental abilities substantially more advanced than those of humans. If we accept this as 'true', and thus that humans are not at the top of the intellectual universe and that different living beings observe the world in different ways, how can we get access to the absolute Truth that is dissociated from species-specific 'self-experience'?
Truth is not a dogma. This is the path to the correct understanding and correct reflection of the surrounding reality.
Human capabilities are limited. Here it is important to critically evaluate scientific truth. I conducted a historical and logical analysis of the development of physics.
I am surprised. There are so many absurdities in science that are presented to us as the ultimate truth.
Science is good enough to solve certain problems without having access to underlying mechanisms or the detailed expressions of phenomena.
Example:
If two phenomena X and Y are tightly correlated, then even without access to the underlying mechanism you can predict the expression of phenomenon X (which can be anything) from the expression of phenomenon Y (which can be anything). In statistics you look at patterns, not at the underlying mechanisms of patterns.
When you communicate with words in science, you are dealing with classes of phenomena summarized in words. But a class reflected in a word is not more than an 'idea/concept', not 'reality' per se?
T.A.'s definition: "There is no exact definition for anything" is a nice kernel from which to expand a bit: perhaps "There can be no exact definition of any thing, but there can be, from Homo sapiens's viewpoint, particular working definitions for every thing"?
A working definition is the definition of a phenomenon that takes the constraints of tools (e.g. measuring devices) describing the phenomenon into account?
Concerning materialistic philosophy: if you see on a radar a moving target that, based on the characteristics of the radar, should have a very high speed (e.g. 3,000-12,000 km/hour) and that also instantly changes direction or suddenly stops, what do you conclude? It must be real, but we currently do not have the tools to make these kinds of objects?
M.M.L. Your second paragraph brought to mind the 1950s SciFi movie "The Thing!"
Nick Pope describes a report of 'The Thing'
https://www.youtube.com/watch?v=XgMv2sI6CU0
Just imagine for a while that humans are somehow like 'ants': they live their lives and at the same time ignore the 'reality' surrounding them?
The Latest: Landmark Change to Kilogram Approved
Nov. 16, 2018
VERSAILLES, France — The Latest on a scientific meeting on how to define weights and measures (all times local):
1:35 p.m.
The international system of measurements has been overhauled with new definitions for the kilogram and other key units.
At a meeting in Versailles, France, countries have voted to approve the wide-ranging changes that underpin vital human activities like global trade and scientific innovation.
The most closely watched change was the revision to the kilo, the measurement of mass.
Until now, it has been defined as the mass of a platinum-iridium lump, the so-called Grand K, that is kept in a secured vault on the outskirts of Paris. It has been the world's one true kilo, against which all others were measured, since 1889.
It is now being retired and replaced by a new definition based on a scientific formula. In their vote, countries also unanimously approved updates to three other key units: the kelvin for temperature, the ampere for electrical current and the mole for the amount of a substance.
https://www.nytimes.com/aponline/2018/11/16/world/europe/ap-eu-france-updating-the-kilo-the-latest.html
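The "scientific formula" mentioned in the article is the fixing of exact numerical values for defining constants: since the 2018 vote, the Planck constant, the speed of light, and the caesium-133 hyperfine frequency are exact by definition, and the kilogram follows from them instead of from the Grand K artifact. As a small illustration of how mass connects to these fixed constants, the sketch below combines the standard relations E = h·f and E = m·c² to get the mass equivalent of a single photon at the caesium frequency (an illustrative calculation, not a lab realization procedure such as the Kibble balance):

```python
# Defining constants of the revised SI -- exact by definition since 2018/2019.
h = 6.62607015e-34      # Planck constant, J*s (J = kg*m^2/s^2)
c = 299_792_458         # speed of light, m/s (fixes the metre)
f_cs = 9_192_631_770    # caesium-133 hyperfine frequency, Hz (fixes the second)

# E = h*f and E = m*c^2 together link mass to the fixed constants:
# mass equivalent of one photon at the caesium transition frequency.
m_photon = h * f_cs / c**2
print(f"mass equivalent: {m_photon:.4e} kg")  # about 6.78e-41 kg
```

The conceptual shift is that a kilogram is now whatever mass makes the Planck constant come out at exactly 6.62607015 × 10⁻³⁴ J·s, so any sufficiently precise experiment, anywhere, can realize the unit without reference to a physical lump in Paris.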
M.M.L.: "ants": that comparison is a standard opening statement in almost every cartography textbook ever written: "ants on a carpet, people on the Earth's surface" [aka: bioshell of the planet].