Goldratt's thinking processes have been used extensively in various industries globally to identify core problems, develop win-win solutions, and structure implementation plans that reduce resistance. In business research, high-powered statistical techniques and operations research techniques are required for publishing in quality academic journals. In the hard sciences, logic and gedanken (thought) exercises are frequently used to prove hypotheses. Are there references for using logic in academic soft-science research? Are there other accepted forms of applied logic in the academic literature?
James -
We humans are 'funny' about logic. We often give it a little of the respect it is due, realizing that logic plus creativity equals good decision making. However, in practice, it is too often trumped by office politics. So whether or not a decision maker will take advantage of good research is problematic. Consider the odyssey of W. Edwards Deming. Since you are looking for references, you might try reading some of Deming's work in quality management if you are not already familiar with it.
But one would at least expect that, theoretically, logic would always be the backbone of any research methodology. However, though theory is much more logical than practice, in my experience the literature leaves much to be desired as well.
Statistical science should be very logical. After all, it is a subcategory of mathematics, and without strict logic, mathematics would be inconsistent and largely useless. Statistics are supposed to be employed to enhance understanding. But individual preference, often illogical, will often enter into the research itself. And from my experience, at least some research methodologists may fail to proceed logically, by default, because they do not understand fundamental principles upon which the research should be built. I have seen a lack of understanding of basic principles cause people to convolute their methodology more and more over time, trying to make improvements by patchwork application of a variety of nonsense, until they have a very complex process that does not work well, when a much simpler, more logical solution is readily available. What I have in mind specifically there is the conduct of official statistics, using establishment surveys, but I know that there are other cases.
Consider the p-value in statistics. It has been used most illogically for over 80 years. People constantly read that they should set some level to be considered "significant," often an arbitrary 0.05, and often make decisions based on that. However, a p-value is a function of sample size. If your null hypothesis is very close to the truth, you can still 'reject' it by using a large enough sample size. If it is far from the truth, you can fail to reject it by using a small enough sample size. I have seen hugely expensive purchases based on "accepting" that test results proved specifications were met, when the sample size was so small that it did no such thing. If you are going to use a p-value, you need to do a type II error analysis (a power analysis), or at least some kind of sensitivity analysis. (For continuous data, a confidence interval - despite some technical difficulties of interpretation - is virtually guaranteed to be a far more practically interpretable and useful decision-making tool.)
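To make the sample-size point concrete, here is a minimal simulation sketch in Python; the true mean of 0.05 and the sample sizes are illustrative assumptions, not figures from the discussion above. With a null hypothesis that is nearly true, the p-value still tends to fall below 0.05 once n is large enough.

```python
# A minimal sketch: the null hypothesis mu = 0 is "very close to the truth"
# (true mean = 0.05), yet a large enough sample will typically "reject" it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_mean = 0.05   # assumed true mean, only slightly different from the null

for n in (20, 200, 2000, 20000):
    sample = rng.normal(loc=true_mean, scale=1.0, size=n)
    t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
    print(f"n = {n:6d}   p-value = {p_value:.4f}")

# Typically the p-value drifts below 0.05 as n grows, even though the
# practical difference (0.05 standard deviations) may be negligible.
```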
My first statistics textbook noted the importance of power analyses, and then, in subsequent examples, immediately dropped them. I expect you will find a plethora of references on significance/hypothesis tests, but a dearth of good ones. :-) There are papers written on very technical issues regarding p-values that one could say employ logic, but they miss the 'big picture,' and thus muddy the waters further for those who want to make good, logical and practical decisions.
Some people will use a power analysis to pick which statistical test they will use, but then fail to use it in the application. A sequential hypothesis test, however, will pick between two competing specific hypotheses, once a given level of evidence is obtained, and on average, takes a smaller sample size to do that. But they are seldom used as one cannot often budget for a sample size that is unknown until you reach your goal.
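As a rough illustration of the sequential idea, here is a minimal sketch of Wald's sequential probability ratio test for a Bernoulli parameter; the hypotheses and error rates below are purely illustrative assumptions, not taken from the discussion above.

```python
# A minimal SPRT sketch: decide between H0: p = p0 and H1: p = p1 once the
# accumulated evidence crosses a threshold; the sample size is not fixed in advance.
import numpy as np

def sprt_bernoulli(data_stream, p0=0.5, p1=0.7, alpha=0.05, beta=0.10):
    """Return the accepted hypothesis and the number of observations used."""
    lower = np.log(beta / (1 - alpha))      # cross below this bound: accept H0
    upper = np.log((1 - beta) / alpha)      # cross above this bound: accept H1
    llr, n = 0.0, 0
    for x in data_stream:
        n += 1
        llr += np.log(p1 / p0) if x == 1 else np.log((1 - p1) / (1 - p0))
        if llr <= lower:
            return "H0", n
        if llr >= upper:
            return "H1", n
    return "undecided", n                   # data ran out before a decision

rng = np.random.default_rng(0)
decision, n_used = sprt_bernoulli(rng.binomial(1, 0.7, size=1000))
print(decision, n_used)   # the sample size actually needed is not known in advance
```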
So single p-values are routinely misused in research, and I suspect that many textbook references exacerbate the problem.
As another example, one will see that in survey sampling, it has long been known that auxiliary data on a population can be of tremendous help in improving estimates from sample surveys of continuous data. However, history has a way of biasing many.
In the 1940s there was much resistance to any kind of sampling. See, for example, Brewer, K.R.W. (2005), "Anomalies, probing, insights: Ken Foreman's role in the sampling inference controversy of the late 20th century," Aust. & New Zealand J. Statist., 47(4), 385-399. Many people thought that only a census was useful. Now that we know more about nonsampling error, we know that a sample can have less "total survey error." Of course researchers likely knew that long before their managers did. But eventually sampling became accepted by most as a useful tool in collecting information, and people realized the usefulness of randomized sampling. However, though the usefulness of auxiliary (regressor) data has long been known - see Särndal, C.-E., Swensson, B. and Wretman, J. (1992), Model Assisted Survey Sampling, Springer-Verlag - there are many who steadfastly refuse to consider it. Note that a purely random sampling approach is good on the basis that if you took the sample many times, you would generally expect a basically unbiased result. But you only take one sample, so if you have correlated auxiliary data on the entire population, you can account for an 'unfortunate' sample, to a degree. (See https://www.researchgate.net/publication/274704886_Short_Note_on_Various_Uses_of_Models_to_Assist_Probability_Sampling_and_Estimation.)
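As a rough illustration of what auxiliary data can buy, here is a minimal sketch (synthetic data, not from the cited references) comparing a plain expansion estimate of a population total with a ratio estimator that exploits the known population total of a correlated auxiliary variable.

```python
# A minimal sketch: auxiliary variable x is known for the whole population,
# the survey variable y is observed only for one simple random sample.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
x = rng.gamma(shape=2.0, scale=50.0, size=N)          # auxiliary variable, known for all N units
y = 3.0 * x + rng.normal(scale=25.0, size=N)          # survey variable, observed only in the sample

true_total = y.sum()
X_total = x.sum()                                     # known population total of the auxiliary variable

n = 100
idx = rng.choice(N, size=n, replace=False)            # one simple random sample

expansion_estimate = N * y[idx].mean()                # uses the sample alone
ratio_estimate = (y[idx].sum() / x[idx].sum()) * X_total   # borrows strength from x

print(f"true total      {true_total:,.0f}")
print(f"expansion       {expansion_estimate:,.0f}")
print(f"ratio estimator {ratio_estimate:,.0f}")
# Over repeated samples the ratio estimator typically has much smaller variance
# here, because y is strongly correlated with x.
```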
If you are looking for a reference that shows the general need for logic in research methodology, I suppose you can find one. I recall evaluating a paper for someone who wrote out a rather detailed, step-by-step process, but I found it too nebulous, and rather useless. It all seems like "common sense," but the real task is getting people to actually use any of it (i.e., the "common sense" that might be called logical).
So when you say "Can you provide any references that support the use of logic as a research methodology?" I think that any generalized references may, as a rule :-), be too far removed from the unnecessarily messy reality with which we deal, and it might be more useful to consider a tale such as that found in the Ken Brewer reference above on Ken Foreman.
Thank you for an interesting question.
Cheers - Jim
I found it useful to look at what is not logic, and therefore not accepted in the academic literature.
http://en.wikipedia.org/wiki/Fallacy
As far as I know, all academic research is founded on logic. The development of new hypotheses, models and theories frequently occurs by trying to overcome logical inconsistencies within the current body of theory, such as contradictions, paradoxes and so on. Karl Popper formalized this evolutionary nature of science through what he called "critical rationalism", which is thoroughly explained in his seminal book The Logic of Scientific Discovery.
http://en.wikipedia.org/wiki/The_Logic_of_Scientific_Discovery
The book Scientific Method in Practice, by Hugh G. Gauch Jr., presents an interesting discussion of the role of logic in the scientific method. It gives some examples on the p-value controversy and on the Bayesian interpretation of so-called "credible intervals". It is remarkable how the idea of testing null hypotheses is closely linked to Karl Popper's falsifiability. In contrast, the Bayesian interpretation is closely related to inductive logic, which was fiercely refuted by Karl Popper. In my opinion, the misuse of p-values has to do with the ease with which hypothesis tests can be performed. Students are often taught the "standard procedure" of hypothesis testing: it is only a matter of correctly following the script and you are done. I think few researchers pay attention to the implicit assumptions in the tests.
http://www.amazon.com/Scientific-Method-Practice-Hugh-Gauch/dp/0521017084/ref=sr_1_1?ie=UTF8&qid=1429465439&sr=8-1&keywords=scientific+method+in+practice
James, et al. -
Attached is a link to a related question, which I think readers here may also find of interest in answering at least part of this question.
Cheers - Jim
https://www.researchgate.net/post/What_is_philosophy_of_mathematics_What_is_it_for/1
Thanks for the Popper and other references; I have read Popper in the past in a free PDF file. I am in the midst of rereading it in paper format (easier to make notes, though I have never gotten through all of the appendices) to determine the linkages to Goldratt's TP. Popper talks about simplicity; Goldratt talks about "inherent simplicity," the causality underlying complex situations. The causal linkages make complex situations much easier to analyze, understand, and communicate to others in order to determine core problems and structure win-win solutions. Many of Goldratt's categories of legitimate reservations (the rules for building his effect-cause-effect trees) can be linked to Popper's concepts. He has provided a common-language, graphical applied-logic tool used to identify core problems (what to change), structure win-win solutions (what to change to), and develop detailed implementation plans (how to cause the change), BUT I am more concerned with the linkages in the other direction: establishing this as a valid research methodology. I am interested in references justifying causality (and logic) as a valid research methodology in studying individual, organizational, management, supply chain, etc. "problems" to identify the "system's core problem."
In the 1950s and 1960s there were a number of references (see the citations below) on using causality and the scientific method (and systems thinking) to analyze these types of problems in journals such as Management Science, Operations Research, Academy of Management Journal, Academy of Management Review, etc. Today these same journals require more complex statistical methodologies (with R2 of .15 or .2) or meta-analyses of previous studies. In the hard sciences one observation can establish a theory and one can invalidate it. In contrast, in studying organizations, action research is frequently used, but it has serious problems with generalizability due to sample sizes; it is viewed as anecdotal. Alternatively, you can do a survey, but you have to wait many years for the sample size to get large enough for statistical validity, and then you wonder about the unspoken assumptions. So you see the dilemma of timeliness versus statistical validity. My injection (solution) to this dilemma is to use logic to strengthen the action research.
Certainly logic is useful in formulation of the hypothesis and is in common use here, but I am interested in logic used in testing hypotheses, as both Popper (prediction) and Goldratt (predicted effect) discussed. I have coauthored a few articles using necessary-condition logic (to frame and determine the assumptions surrounding a core problem in production: traditional assumptions, lean assumptions, and TOC assumptions) and sufficiency logic in studying a white-collar problem to identify the core problem, BUT these publications were in good journals, not the top journals (Management Science, Academy of Management Review or Journal, etc.). I feel that establishing the use of logic (causality) as the research methodology is the missing link in getting into the top academic journals.
Thanks for your useful inputs.
Best regards,
Jim
PS: I just read James R. Knaub's and Anselmo Ramalho Pitombeira-Neto's responses to my question. Thank you for the detailed and insightful responses. I have long felt that many statistical surveys were drivel and lacked a solid research foundation. I approached an excellent statistician and planned a study using a current reality tree (an effect-cause-effect diagram) of the multi-project management environment to determine the causal relationships that exist from the core problem to the policies, procedures, measures, and actions in that environment. I wanted him then to construct a survey based on that current reality tree. I then wanted to pilot that survey instrument with a half dozen different multi-project organizations to identify the unspoken assumptions that reduce the R-square. I felt this was what was meant by piloting a survey, not just checking the clarity of the instrument. My goal was to establish the development of a structured logic model as the first step in constructing the survey instrument, with piloting as the second step in building and testing it prior to sending it out. I have always had serious objections to a survey with an R-square of .15 or .2, no matter what the p-value: you are missing an explanation of 80 to 85% of the variation, in my mind. Hence you understand the motivation for establishing logic as a valid research methodology and, later, as the basis of statistical research, though I probably won't be around for that last conversation.
REFERENCES
1962. A note on asymmetric causal models. American Sociological Review, 27(4): 539-542.
Ackoff, R. L. 1956. The Development of Operations Research as a Science. Operations Research, 4(3): 265-295.
Boulding, K. E. 1956. General Systems Theory--The Skeleton of Science. Management Science, 2(3): 197-208.
Greenhut, M. L. 1958. Discussion Mathematics, Realism and Management Science. 314-320.
Helmer, O., & Rescher, N. 1959. On the Epistemology of the Inexact Sciences. Management Science, 6(1): 25-52.
Jermier, J. M., & Schriesheim, C. A. 1978. Causal Analysis in the Organizational Sciences and Alternative Model Specification and Evaluation. Academy of Management Review, 3(2): 326.
Johnson, R. A., Kast, F. E., & Rosenzweig, J. E. 1964. Systems Theory and Management. Management Science, 10(2): 367-384.
Lathrop, J. B. 1959. Operations Research Looks to Science. Operations Research, 7(4): 423-429.
Margenau, H. 1955. The competence and limitations of scientific method. Journal of Operations Research Society of America, 3(2): 133-146.
McCormick, T. C. 1952. Toward Causal Analysis in the Prediction of Attributes. American Sociological Review, 17(1): 35-44.
Regopoulos, M. 1966. The Principle of Causation as a Basis of Scientific Method. Management Science, 12(8): C-135-C-139.
Weinwurm, E. H. 1957. Limitations of the Scientific Method. Management Science: 225-233.
Jim -
Two little statistical comments: 1) R2, like a p-value, is inconveniently susceptible to influences that can make it misleading. Sometimes you can't beat a graphical display, particularly scatterplots, for communicating what the data has to say. 2) Statistics can show correlations, but not causality. You assume a model, and check it out with data, but not vice versa.
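As a minimal illustration of point 1 (synthetic data, purely hypothetical): a single high-leverage point can produce a large R2 even when the bulk of the data shows no relationship at all, and a scatterplot would expose this immediately.

```python
# A minimal sketch of how R^2 can mislead: ten points with no real relationship
# plus one extreme point yield an R^2 near 1.
import numpy as np

rng = np.random.default_rng(3)
x = np.append(rng.uniform(0, 1, size=10), 20.0)   # one high-leverage point
y = np.append(rng.uniform(0, 1, size=10), 20.0)

r_squared = np.corrcoef(x, y)[0, 1] ** 2
print(f"R^2 = {r_squared:.3f}")   # close to 1 despite no pattern in the bulk of the data
```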
Are you familiar with Deming's quality management work? If not, there is a survey done in Greece just after WWII by Deming and some other well-known statisticians that might be of interest, if I can find that reference again. After that, he got into quality management, and when US industrialists failed to listen, he went to Japan, and soon "made in Japan" was no longer a joke.
I'm rather certain you can find a number of articles by Deming, though nothing new actually by him (but perhaps by others on his methods), as he died at least 20 years ago, I think. There is a special Deming Lecture at the Joint Statistical Meetings each year (the JSM is sponsored partly by the ASA).
Another old source would be John Tukey, who started Exploratory Data Analysis (EDA), which is just a bunch of common sense.
Not sure that this is exactly what you are looking for, but hopefully related.
- Jim
PS - I think the author of your first reference is missing.
In case you or anyone else reading this thread may be interested, I found that reference to a Greek population study that I mentioned:
Jessen, Raymond J., et al. (1947), "On a Population Sample for Greece," Journal of the American Statistical Association, Vol. 42, September 1947.
Deming, and I think Oscar Kempthorne, are among the authors, as I recall. - On more than one occasion, I found that article to be fascinating.
Instead of R2, you could look at the "variance of the prediction error." In econometrics, I think they often look at individual predictions. For totals, which I used with finite populations, there is more to consider. But if you are looking at individual predictions (not forecasting; this is for data missing from a current sample), you could use available software. I know that SAS PROC REG gives you the square root of the individual prediction error variance as STDI. If you have important predictions, you want them to be substantially bigger than their corresponding STDI estimates. I imagine you could find something similar for forecasts.
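For readers without SAS, here is a minimal numpy sketch (hypothetical data) of the same quantity as I understand it: the standard error of an individual prediction from an ordinary least squares fit, computed from the usual formula rather than any particular package.

```python
# A minimal sketch of the standard error of an individual OLS prediction
# (the quantity reported as STDI in SAS PROC REG, per the note above).
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.uniform(0, 10, size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=2.0, size=n)

X = np.column_stack([np.ones(n), x])                  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])             # residual variance estimate
XtX_inv = np.linalg.inv(X.T @ X)

x_new = np.array([1.0, 7.5])                          # a new point to predict
y_hat = x_new @ beta
stdi = np.sqrt(sigma2 * (1.0 + x_new @ XtX_inv @ x_new))   # std. error of an individual prediction

print(f"prediction {y_hat:.2f}   STDI {stdi:.2f}")
# A useful prediction should be substantially larger than its STDI.
```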
I have successfully used Goldratt's theory of constraints when fixing KRAs (key result areas) for the organization since 2010.
The prime goal of our organization is relevant publications from the current (2015-16) KRA processing.
The references we choose are very important; they should be chosen so that the reference value is high.
Dear @James R., thanks for sharing the question. I like logic a lot. As @Ian has stated, I have used the same approach also. Besides, I have found an interesting read about LOGIC. The author treats LOGIC in terms of Research Methodology and Research Design: "...In general (but not always), quantitative research methods are usually associated with deductive approaches (based on logic), while qualitative research methods are usually associated with inductive approaches (based on empirical evidence). Similarly, deductive-quantitative designs are usually more structured than inductive-qualitative designs." File is attached.
Jim -
Are you the coauthor of the "novel" summarized in the link attached below? Interesting concept. A friend of mine did something similar for his survey statistics textbook:
Brewer, KRW (2002), Combined survey sampling inference: Weighing Basu's elephants, Arnold: London and Oxford University Press.
Getting readers involved through a story is an interesting approach.
I suppose that finding "bottlenecks" has come a long way since the late 1970s when we made a chart of a hardware testing process, made "subjective" probability distribution assumptions for time to accomplish each segment, and ran software that identified a distribution of time to project completion. As I recall, such project management software may have helped identify bottlenecks in the process.
Jim
http://www.maaw.info/ArticleSummaries/ArtSumTheGoal.htm
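For readers curious what such a simulation might look like today, here is a minimal Monte Carlo sketch; the tasks and triangular "subjective" distributions are made-up assumptions, not the late-1970s study described above. It estimates time to completion and shows which parallel segment most often gates the project.

```python
# A minimal project-duration Monte Carlo sketch with hypothetical tasks:
# B and C run in parallel after A, and D follows both.
import numpy as np

rng = np.random.default_rng(4)
tasks = {"A": (2, 4, 8), "B": (3, 5, 12), "C": (4, 6, 9), "D": (1, 2, 4)}   # (min, most likely, max) days

n_sim = 10_000
draws = {k: rng.triangular(*v, size=n_sim) for k, v in tasks.items()}

parallel_winner = np.where(draws["B"] >= draws["C"], "B", "C")   # the slower branch gates the project
completion = draws["A"] + np.maximum(draws["B"], draws["C"]) + draws["D"]

print(f"mean completion  {completion.mean():.1f} days")
print(f"90th percentile  {np.percentile(completion, 90):.1f} days")
for seg in ("B", "C"):
    share = (parallel_winner == seg).mean()
    print(f"segment {seg} is the bottleneck in {share:.0%} of runs")
```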
Yes, in business studies, analysis of business processes, trends, and logic is often used as a research methodology. Some reputable outlets publish such case studies, for example Harvard Business Review and INSEAD Case Publishing: http://cases.insead.edu/publishing/latest-case-studies. You may find some books on case-study development on the web.
@James F. Cox -
I am embarrassed by your post. We use logic in everyday decision making. If one crosses a road (in my country car drivers are not as sensible as in the West; even a pedestrian crossing at a zebra crossing is not safe), one has to take into account the speed of the cars, the distance from where one wants to cross, and one's own walking speed; then one decides whether it is safe to cross or not. All these considerations are logically integrated into the decision. In business research or soft science, after identification of the problem, the first step is to formulate one or several hypotheses to find a solution. A hypothesis is generally defined as an educated or informed guess at a solution. This guess does not come from nowhere: on the basis of the nature of the problem, you have all the information related to it and logically deduce what the possible solutions may be. If one hypothesis is rejected, test another, and go on until one finds a satisfactory result/solution. I have a torch; my son informs me, "Dad, it is not working." I formulate the hypothesis that its bulb has fused and ask my son to get a new bulb. After putting in a new bulb, my son informs me it is still not working. I formulate another hypothesis, that the battery is exhausted, and ask my son to fetch a new battery; after replacing the battery, it still does not work. It is beyond me, so I take the torch to the mechanic, who will also formulate hypotheses regarding accumulation of carbon on the connecting plate, shortening of the connecting rod, dysfunction of the switch, etc., and finally a solution will come.
To form a hypothesis, based on logic, is the first step towards a solution. Whether one uses optimization techniques in work allocation and in assigning machines to workers to optimize or maximize production, or some statistical methods, all are logical means to a desired end. The so-called scientific method involves logical steps and is itself developed by applying logic.
There are no research methods outside of logic, whatever the scientific field. Mathematical logic, pattern-based models, and statistical models all rest on a logical basis. We must have, extract, or develop data for analysis, inquiry, and study, and we should use a modeling approach to assess, estimate, quantify, generate, weigh, or predict its behavior for further understanding, development, or investigation. Inductive, deductive, inferential, fuzzy, and similar methods are close to mathematical logic. However, almost all models need exactness, or uncertainty reduction, i.e., they need convergence in terms of stability, disambiguation, resemblance, or evidence.
http://srmo.sagepub.com/view/keywords-in-qualitative-methods/n54.xml
http://srmo.sagepub.com/view/the-sage-encyclopedia-of-social-science-research-methods/n509.xml
http://www.esourceresearch.org/Default.aspx?TabId=545
http://en.wikipedia.org/wiki/Uncertainty
http://en.wikipedia.org/wiki/Convergence
http://en.wikipedia.org/wiki/Stability_theory
James, I found these two papers that propose logical analysis and the use of logic in qualitative research, originally related to the sociology of human health but extendable to other soft-science fields. Both are long papers, but with very interesting points of view that can be taken into consideration, and you may enjoy the richness of the discourse. I think they address the two questions at the end of your case presentation.
http://onlinelibrary.wiley.com/doi/10.1111/1467-9566.ep11343875/epdf
http://onlinelibrary.wiley.com/doi/10.1111/1467-9566.ep11343880/epdf
Best regards.
I think that logic can be used in management solutions to obtain a good diagnosis of problems, but not in other specialties like statistics. We say that correlation does not imply causation.
Fuzzy logic techniques have been used in the literature to develop sustainability indices. I recently came across an article using this methodology quite effectively.
Regards!
Rashid
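For readers unfamiliar with the approach, here is a minimal sketch of the basic fuzzy-index idea; the indicators, weights, and membership thresholds below are illustrative assumptions, not taken from the article mentioned. Crisp indicator scores are mapped to [0, 1] membership degrees and then aggregated into a single index.

```python
# A minimal fuzzy-style sustainability index sketch with hypothetical indicators.
import numpy as np

def ramp(x, low, high):
    """Fuzzy membership rising linearly from 0 at `low` to 1 at `high`."""
    return float(np.clip((x - low) / (high - low), 0.0, 1.0))

# Indicator scores on a 0-100 scale and their weights (illustrative only).
indicators = {"energy_efficiency": 62.0, "recycling_rate": 48.0, "water_reuse": 75.0}
weights    = {"energy_efficiency": 0.40, "recycling_rate": 0.35, "water_reuse": 0.25}

memberships = {k: ramp(v, low=20.0, high=90.0) for k, v in indicators.items()}
sustainability_index = sum(weights[k] * memberships[k] for k in indicators)

print({k: round(m, 2) for k, m in memberships.items()})
print(f"sustainability index = {sustainability_index:.2f}")   # 0 = not sustainable, 1 = fully sustainable
```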
One of the major drawbacks of conventional expert systems is that they are largely text based and require some technical skill in using their often proprietary rule syntax. Logic programming offers a versatile, extendable development tool based on logic and with access to the underlying programming language. However, even with its highly readable knowledge specification language, it still requires domain experts to read and write rules as individual items of text using a specialized syntax and to remember the connections between them. To overcome these issues, a graphical environment can be used in which rules are simply defined by a combination of graphical shapes and pieces of text.
Decision Logic Charting Approach in Transfer of Property
http://www.cscjournals.org/manuscript/Journals/IJAE/volume1/Issue2/IJAE-10.pdf
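As a minimal sketch of the kind of rule evaluation such systems perform, here is a small forward-chaining loop that applies if-then rules to a set of facts until nothing new can be inferred; the facts and rules are purely hypothetical and are not taken from the cited paper.

```python
# A minimal forward-chaining sketch: facts plus if-then rules producing new conclusions.
facts = {"property_is_registered", "seller_has_title"}

rules = [
    ({"property_is_registered", "seller_has_title"}, "transfer_is_permitted"),
    ({"transfer_is_permitted", "stamp_duty_paid"}, "deed_can_be_executed"),
]

changed = True
while changed:                       # keep firing rules until nothing new is inferred
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))   # 'deed_can_be_executed' is NOT inferred: 'stamp_duty_paid' is missing
```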
Fine! Here is another application of decision logic framework in the problem of rural electricity network options to reduce bushfire risk! "Two decision logic flow charts were developed ...and are shown here for ‘green field’ applications and ‘brown field’ applications... Each can lead to selection of a network or a non-network (local generation) approach to reduction of fire risk from electricity supply assets and activities."
http://www.energyandresources.vic.gov.au/energy/safety-and-emergencies/powerline-bushfire-safety-program/swer-workshop-final-report-june-2010
I can refer you to transdisciplinary work by Basarab Nicolescu and workers such as Max-Neef and Paul Cilliers. See, for example: Nicolescu, B., "Methodology of Transdisciplinarity – Levels of Reality, Logic of the Included Middle and Complexity," Transdisciplinary Journal of Engineering & Science, Vol. 1, No. 1 (December 2010), pp. 19-38. I trust you will find it useful.
Wow! I certainly got several leads toward answering my question. Let me try to respond to as many as I can in this correspondence.
First, to respond to James Knaub's question: I am not the coauthor of The Goal, written by Eliyahu M. Goldratt and Jeff Cox. I am James Cox. Jeff is a professional business novel writer hired by many authors to assist in writing. He coauthored The Goal with Goldratt (a novel about managing a manufacturing facility using TOC and systems thinking); later Velocity with Dee Jacobs and Susan Bergland (a novel on combining TOC, Lean, and Six Sigma); and even later Hanging Fire with Dale Houle and Hugh Cole (a TOC novel on project management). I am, however, a TOC advocate and have frequently used Goldratt's thinking processes with significant success. Yes, using the Socratic approach to teaching has proven very effective.
I must say I was a close friend and follower of Dr. Goldratt, a PhD physicist turned business consultant. He shared all of his knowledge freely with academia. He used his logic tools in his experiments in industry. He taught his logical tools for analyzing organizations and complex situations to many, many consultants and practitioners. Many large and small companies use his tools effectively.
Ian’s link on what is not logic is quite educational but doesn’t help in validating the use of logic in academic publications. It certainly helps me to avoid these pitfalls though. Thanks.
Anselmo, I have several books on science and the scientific method. This sounds like an excellent addition to my collection. Thanks.
Jim's link to the discussion of the philosophy of mathematics is quite insightful, particularly given that mathematics is the basis for all quantitative analysis. In contrast, qualitative analysis is still struggling to find a similar basis; maybe logical analysis is what is needed as a legitimate foundation. Kathleen Eisenhardt's publications on case study research were a start in legitimizing it. Jim, also thanks for your reference to the Greek population study; I will try to track it down through our library.
Krishnan, since you have used Goldratt's theory of constraints, you know of his Thinking Processes logic tools and how various organizations have used logic diagrams to show the simplicity of the causal relationships in very complex organizations. Thousands of organizations have used these tools, but little is written in the academic literature about their use. Hence I see the need for references in top academic journals on the use of logic in qualitative analysis. Thanks for the references.
Mahmoud, I am familiar with the Harvard Business cases and other cases for classroom use. I have frequently used cases in teaching but I am focusing on references to support logic as a research methodology in analyzing an organization. You might say I am trying to strengthen the way we analyze qualitative research.
Mohammad, I apologize for embarrassing you. From reading your response, I don't think you understood the question. Your examples of logic are interesting but certainly wouldn't be useful in providing references in top-level journals. Please look at Goldratt's novel It's Not Luck for examples of his necessity-based logic (pages 11 and 22) used to solve a problem and his sufficiency-based logic (pp. 111-130) used to understand the causality that exists among "problems" and the core problem in most organizations. As one can see, the situations are more easily communicated and understood using his tools for logical analysis. If you read the introduction to The Goal, Eli discusses the use of the scientific method and having the mind of a scientist in studying our reality.
Fairouz, thanks for the web linkages; they should be helpful in the discussion of logical analysis as a basis for qualitative research. I will have to go through my university to search the sites.
Federico, I have downloaded the two papers you recommend and both look quite useful. I think a 1990 paper by Williams might also be useful.
Sayed, I agree that logic is nothing more than common sense; but as Goldratt said on many occasions, common sense is not common practice. When individuals lay out a solid logical argument, most listeners say: it's just plain common sense. Yet many people and organizations do not implement common sense, particularly when one is in a functional position in an organization and sees only his or her own view.
Ebrahim, I agree. It is hard to run statistics on a sample of one. BUT you have to make decisions, and a logical foundation is quite necessary in complex environments. So back to a question: why isn't logic recognized in academic research for studying complex problems?
Rashid, I could not connect to your link to Review of Management Science Volume. Sorry.
Krishnan, I had not thought of the link to expert systems and AI. Very interesting! The application to transfer of property is quite interesting.
Ljubomir, your Figure 6 (Causation chain framework) in section 2.1, Causation chain and risk management frameworks, looks quite interesting. It seems to be similar to a high-level current reality tree on the top part of the diagram and a future reality tree showing actions to trim the negative effects on the bottom side of the figure. This provides a high-level view of the problems and potential actions for each area. It might be an interesting analysis to construct a current reality tree of the problem, its future reality tree showing the solution, and then the prerequisite tree showing how to implement the solution (a small illustrative sketch of this kind of causal-tree analysis follows this reply).
Sydney, I will have to track down my librarian to find this journal. This looks like a great reference. Thanks.
Best to all of you. You have helped tremendously. Thanks.
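Picking up the current reality tree discussion above, here is a minimal sketch of treating such a tree as effect-to-cause links and flagging candidate core problems, i.e., causes that are not themselves effects of anything else in the tree. The entries are hypothetical illustrations, not Goldratt's actual tooling or any published tree.

```python
# A minimal current-reality-tree bookkeeping sketch with hypothetical entries.
causes_of = {
    "projects finish late": ["resources multitask across projects"],
    "resources multitask across projects": ["every project is released as soon as it is approved"],
    "budget overruns": ["projects finish late"],
}

all_effects = set(causes_of)
all_causes = {c for cs in causes_of.values() for c in cs}
core_candidates = all_causes - all_effects   # causes that nothing else in the tree explains

print(core_candidates)   # {'every project is released as soon as it is approved'}
```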
Dear James,
Firstly, I agree with James Knaub, as the basis of statistics and mathematics is logic.
In short, I think we should first explain what LOGIC itself is. Ultimately, what logic and logical methods need is "intelligence". So, the more intelligent the person, the more logical the method he or she uses and the decision he or she makes. And since intelligence differs between persons, logical methods differ too.
We all know that the main thing that needs logic is decision making. Based on his or her logic, one person can decide more effectively than another; there is no perfect decision, and the best is the one with the fewest drawbacks. If we look at companies, countries, universities, etc., we see that decisions are made in councils. Hence, logic needs not only intelligence but several types of intelligence, because each person considers the problem from a different viewpoint.
The best logical decision is the one that weighs every aspect and consequence of a decision in proportion to its value. This is the heart of logical thinking. References on logic can only teach us to think logically in general and what to consider in order to act logically.
Dear Professor Cox,
this question is more philosophical than scientific, I believe.
Thanks to the great philosophers before us, 'logic' has become a well-defined, human-made system. We can easily determine whether a statement is logically valid or not.
As for soft versus hard science, I do not think it is necessary to distinguish them. I personally believe in Karl Popper's falsifiability criterion. In brief, a theory or statement is falsifiable if a counter-example could exist that would prove it wrong. Whatever satisfies this criterion we call scientific; otherwise, it is not.
Kuan-Wei