Is there something about what is researched or how the research is presented that is slowing the adoption of the best research into practice?
Some research is based on too many assumptions, due to limited research resources. Sometimes, too many assumptions lead to results that do not hold in the real world.
Also, many researchers aren't developers, and they ask research questions whose answers, however they turn out, wouldn't really have value or practical use for developers.
If you look back over the years, much research has come and gone and is not missed. The economic imperative dominates developers; computing researchers tend to concentrate on what is possible, not what is viable. As primarily a private-sector developer, much of the research work being done in general computing would not help me earn a living and often would add to my costs. Perhaps the academic community needs to look at this?
PS I agree with both of the previous answers
If pure economic gain were the only thing developers are interested in, John's way of putting it would be a bull's-eye answer. However, I do believe that there are developers whose motives are not just to develop directly profitable solutions, but also solutions that would indirectly benefit them in more than one way (other than just economically).
Look at open source projects. Companies that develop only for pure profit say that these open source projects are a waste of knowledge and are run and stimulated by people who have no entrepreneurial knowledge and who run on what is, to them, a broken ideology. They also advocate that proprietary software is actually what drives software and hardware development.
[ a bit off-topic ]
There have been studies (University of Bologna, Hanyang University, National University of Singapore, etc.) that deal with many aspects of both approaches, and not one has shown that non-profit development is holding back actual growth. Instead, those studies have shown that open source, non-profit, non-economically-driven development is the driving force of innovation and security, and the key player when it comes to spreading technology and open source to different areas of life, by giving "open" knowledge and supporting the development of embedded systems in education, government (the University of Munster implementation), medicine (the FH Wiener Neustadt implementation of open source), smart-house systems, and many other fields in which economy-driven software and hardware companies never used to show interest unless very big profits were guaranteed, whereas open source developers do it as a form of charity, hobby, non-profit campaign, good will, etc.
[ / a bit off-topic ]
The reason some developers (open source supporters) do not consider the best research is that those studies are usually requested (commissioned) by big companies, and they don't always deal with what is best for the developer or the client, but with what is best for the company for which the research was conducted.
Open source software may be free, but it is not without cost. There are overhead costs in its use, e.g.:
1) The cost of incorporation
2) The cost of assimilation
3) The cost of Support
4) The loss of control over development direction
5) The vulnerability to other's failures
6) The cost of finding and choosing (i.e. the number of solutions to sift through)
This list is not complete. If you assess the overhead cost of using open source, then many companies rightly reject some products. If the pedigree is OK and support is likely, then it may be viable - but there will still be many costs!
I have just become familiar with OpenCL; it has taken me months - there is the cost.
I don't think the issue of open source is the significant one. Nor do I (a developer) measure profits. It's time that matters. The investment in understanding takes time. Research takes time; how do we know it's worth it? That time includes me earning a living. Do a search on any topic and get back a million entries - do you read them all? Go to the forums and see what everyone else is doing. Having done all of this, ask yourself: what's best? If you wait long enough you will see...
anecdote:
I recently threw away 10 years of computing journals - I had meant to read them. Through three house moves they went from loft to loft. In the end I had to admit that I would never find the time, so they went for waste paper scrap - I bet I'm not the only one who has done that.
The volume of research available obeys a kind of entropic analogy: as it expands, its ability to be useful diminishes.
Another issue is teaching the best software engineering practices to students. Students coming out of universities often end up learning almost the opposite of good software engineering practices. While some schools do teach software engineering as a class, the students get limited experience working with others on large projects.
http://dl.acm.org/citation.cfm?id=1839601
I agree that what is researched is generally not very helpful to many developers of software. However, there is a large problem with presentation.
1) How is a developer to know what the 'best' software development research is telling them? If I read published papers, they generally contain a lot of hand-waving arguments, generalizations, personal anecdotes, and a lack of solid conclusions. The research papers generally don't help, so why would an overworked developer spend more of their free time attempting to find a few gems in the morass of unhelpful papers?
2) The terms used in the software industry are not well defined. For example, try to list the differences between structured development, object-oriented development, and model-driven development. You'll find that to be a very difficult task, because many researchers disagree about the definitions of each of these terms.
3) Software developers are forced to be generalists, unless they work at a very large company. Keeping up a diverse skill set is difficult, which explains why certifications like the Microsoft .NET or Sun Developer certifications are useful. These certifications provide a one-stop shop that gives the software developer the ability to address a specific range of problems. However, I am not aware of any one-stop shop for the dissemination of the so-called 'best research on software development.' That means finding that information takes time, and when it is found, new questions crop up, such as: (i) How long will it take to learn this method? (ii) Do I have the necessary skill set to use this method? (iii) Will the benefits of the method outweigh the cost of learning it? (iv) Will this method be effective for me and my work at this company? Now, has the author of that research paper given me the answers to my questions, or must I devote more time just to answer them?
I was a software developer for 9 years before going back to get a PhD to do research. The above items describe my experience with attempting to discover what the 'best methods' in software engineering were at that time. The ultimate conclusion I came to was that the cost of finding better methods by looking through IEEE or ACM magazines and research articles is larger than the actual benefits most of the time.
Another point on why developers do not use the latest research in software development is that 95-99% of developers do not read scientific papers.
The "normal" knowledge acquisition is:
- reading books (mostly written by other developers)
- reading weblogs written by other developers
- reading at bulletin boards and knowledge bases about programming
So I would say one problem is how research is presented to the developers' world.
First of all, I want to thank Magel Kenneth for raising an important question that is very relevant today. This question is closely related to his other question, "What should researchers know about software development practice?" But, as it appears to me, both of these questions are also closely related to the "research-practice gap" problem. To express my thought clearly, I want to cite a passage from "Donald Norman: Designing For People" http://www.jnd.org/dn.mss/the_research-practice_gap_1.html
"There is an immense gap between research and practice. I'm tempted to paraphrase Kipling and say "Oh, research is research, and practice is practice, and never the twain shall meet," but I will resist. The gap between these two communities is real and frustrating ... ". Please, have a look on this link. This note expresses the opinion of a well-known scientist. Now I want to pay your attention on a point "Translational Development ". I want to cite some important phrases from this point: "Between research and practice a new, third discipline must be inserted, one that can translate between the abstractions of research and the practicalities of practice. We need a discipline of translational development. ... This intermediate field is needed in all arenas of research. It is of special importance to our community. We need translational developers who can act as the intermediary, translating research findings into the language of practical development and business while also translating the needs of business into issues that researchers can address. Notice that the need for translation goes in both directions: from research to practice and from practice to research. ... There is a huge gap between research and practice. To bridge the gap we need a new kind of practitioner: the Translational Developer. The gap is real, but it can be bridged." I am a follower of Norman'idea about the "Translational Developer". In my view, this idea must initiate a new profession named the "Translational Developer". I want to explain my thought. At present we have two types of people:
1. Researchers
2. Developers
Researchers never want to know about the problems of developers ("... some researchers just get satisfied with just reading about developers' requirements" - Waleed Al-adrousy) and developers don't read research papers ("... 95-99% of developers do not read scientific papers" - Martin Holzhauer). I propose considering another scheme:
1. Researchers
2. Translational Developers
3. Developers
The translational developers can become a community that understands the problems of both researchers and developers. My opinion is the following: it is necessary to leave researchers and developers alone. Let researchers create their algorithms (theories, approaches, methods, etc.) and let developers realize these algorithms using modern technologies (ASP.NET, C#, SQL Server, etc.) to produce commercial products. The translational developers must solve all the problems between researchers and developers. In my opinion, we can consider the following scheme:
Step 1. Researcher's Offline Tool
Step 2. Non-Commercial Online Tool
Step 3. Commercial Online Tool
The Offline Tool is a researcher's program in some programming language (e.g. C++), written by the researcher. The Non-Commercial Online Tool is a development (e.g. in ASP.NET with C#) that must be created by the Translational Developer from the Offline Tool. The main goal of such a development is to be accessible to researchers and practitioners so that they can solve their tasks from a server on the Internet. The Commercial Online Tool is a development that must be created by a Developer (or a group of developers) from the Non-Commercial Online Tool. The goal of such a development is to become a very attractive commercial product for all users. Each Translational Developer must be a good specialist in a problem area. The role of the Translational Developer is to:
1. Understand a research problem very well
2. Transform the researcher's algorithms from the Offline Tool into code (e.g. in ASP.NET with C#) in order to create a non-commercial site on the Internet
3. Find a Developer (or a group of developers) to create a Commercial Online Tool from the non-commercial site. Thus, the Translational Developer will speak with both the Researcher and the Developer in languages they understand.
Very interesting idea about translational developers. Presently, this task is handled largely by startup companies started by researchers or their students, and by major company research labs. If we envision another group, the transitional developers, who would pay for them? Remember, government spending is likely to decline greatly in the coming decade.
Let us create an Association of Transitional Developers.
If this idea is approved by community members, we can discuss the details then. For example, I could become a Transitional Developer in the area of Combinatorial Optimization.
I really wonder if it would be best to be a part time translational developer and part time regular developer. In that way, you could maintain your credibility among developers and know what was best to try to transition from research to actual practice. Or one could be a part time translational and part time researcher. In that case, one could have a better grasp of the actual research and where it should be brought to the regular developers.
One of the problems with many researchers is the compulsion to classify everything. At each subdivision one can spawn a whole set of related ideas which in turn spawn more classes and more ideas. Soon the very number generates the proverbial babel of jargon and bloat.
Here is the key to being a good developer and good researcher.
Reduce the number of possible paths to a minimum. Don't allow others to fill your mind with spurious untested information and then you may have TIME to do your job.
And that is what is wrong with current research (particularly in the "unnatural" sciences). And it explains why much of it is not adopted. (answer to original question)
It's because researchers are separated from practitioners at the very start of their research. Moreover, researchers present their findings and solutions in a language that, most of the time, is not even understandable to their fellow researchers, so how is an ordinary software house developer going to understand it? This is mainly due to the rejection of work by many good journals if the research is presented in a simple style.
I want to offer concrete steps to reduce the gap between Researchers and Developers. Magel Kenneth mentioned the notion of the "Transitional Developer" (Norman's original notion was the "Translational Developer"). I like the notion of the "Transitional Developer" as well as the notion of "Transitional Development". Let us imagine the following hypothetical situation. Assume ResearchGate stores N1 = 100 * n papers linked to the process of programming. This is the First Step: Research Offline Tools in different programming languages. The Second Step must produce Transitional Developments (not Commercial Tools) from the Research Offline Tools. How large can the number N2 of such developments be? Let us assume that N2 = 10 * n, i.e. one in ten Research Tools will be transformed into a Transitional Development. The Third Step must produce N3 Commercial Developments from the Transitional Developments. Let us assume that N3 = n, i.e. one in ten Transitional Developments will become a Commercial Development.
Now let us devote our attention to the Second Step. A question arises: what necessary property should any Transitional Development have? In my view, this property must be ACCESSIBILITY: anyone should be able to use the Transitional Development on the Internet. I propose to consider Transitional Developments as "Non-Commercial Online Tools". Now I want to explain my thought in more detail.
Let's assume you are interested in a ResearchGate paper written by some author, and you want to evaluate the experimental results from the paper. I want to ask: why would we trust these results? These results were obtained for the author's data. As for me, I would believe them more if the experimental results were obtained for my data. Assume now that you want to use this paper to solve your own task, but the author's program (Research Tool) is not accessible. What would you do now?
In my view, there are some possible scenarios:
a) Send a message to the author asking him to solve your problem with your data
b) Ask the author to send you his .exe program
c) Study the author's solution approach in order to write your own program using that approach
d) Develop your own solution approach and program to solve the task
If you are a practitioner, then you can count only on options 'a' and 'b'. But options 'a' and (especially) 'b' don't guarantee you will solve your problem. If the author is kind enough to you, then you will solve it, but in any case you could not keep asking the author to solve your problems for long. If you are a researcher, you can really count only on options 'c' and 'd'. You must choose which is better: 'c' or 'd'? As for me, I would choose option 'd'. I don't know, maybe I'm wrong, but I would try to solve the problem using my own approach. Or there can be another problem: how to compare the author's results ('A') with your results ('B') for the same data? What would you do? Would you develop a program PA (using the author's approach) in order to get the results 'A', and then compare 'A' with 'B'? As for me, if the author's algorithm is not very complex, then I would develop PA in order to get the results 'A'. Thus, many problems arise. The main reason is that the author's tool is not accessible for people to use. In my view, this disadvantage can be removed by the use of the Transitional Development.
The main feature of the Transitional Development must be ACCESSIBILITY. The Transitional Development must have three necessary properties:
1. Accessibility on the Internet
2. Reliability of output results
3. A comfortable user interface
in order to become a starting point for the Commercial Development. Thus, I propose to replace "Paper + Inaccessible Offline Research Tool" with "Paper + Accessible Online Transitional Development". Currently, we can only read papers on ResearchGate and cannot use the Offline Research Tools to solve models for our own data. But it would be better if we could not only read the papers but also solve models for our own data using an Accessible Online Transitional Development on the Internet.
I want to summarize some useful properties of Transitional Developments. Assume you present your report at a conference using both a PowerPoint file and an Offline Tool from your local computer or USB flash drive. I propose presenting both the PowerPoint file and the Transitional Development from the Internet. By presenting your Transitional Development results, you will gain full trust from your colleagues, who will have known about your Transitional Development link in advance. Your colleagues will be more interested in your solution approach if they have used your Transitional Development from the Internet beforehand to solve your models for their own data. As for me, the traditional presentation of research as papers only has exhausted itself. Now, in the 21st century, we can view any video, interview, film, etc. on the Internet, but so far we can only read papers as files in different formats. I think the time has come to change our point of view.
In conclusion, I propose creating a start-up structure (e.g. DevelopmentGate) to accumulate Transitional Developments on a server, built from papers on ResearchGate (or from other sources). Please share your opinions.
Gennady,
Take the time and effort for one set of results (one paper), multiply it by a plausible number of papers per hour, add the time to assess the benefit and to ensure it will be of value. Cost the effort to productise it and then put it into the capricious market.
Thus: Time, then Effort, then Cost all have to be provably beaten by benefit. The greatest of these is TIME.
In my experience, developers don't use the best research because there's a mismatch between the research and their day to day life. Developers with more autonomy may adopt practices and such from cutting-edge research, but for the most part, devs work for businesses, and businesses are more interested in ROI and such. There has to be a level of practicality in the research that will immediately and positively impact the developer's work.
This is the most depressing issue in the industry today. Developers have become distanced from the industry floor, which makes them less aware of the user perspective and user requirements. Heeks R. (2005) defines this as the 'design-reality gap'. In his view, hybrids (experts in both software development and practice) are the solution; I believe Gennady's 'Transitional Developer' is similar to this concept. However, in order to be successful, system development should be a continuous process and should be adaptable to the changing organisational context; hence, it is a much bigger problem than we think.
I'll say it once more:
I have been a developer for over 30 years. I try to keep up. There is not enough time.
PS To survive as a developer you must continuously adapt to the latest computing fashion.
I want to outline the main goal of Transitional Developers. The main goal is to prepare infrastructure, in the form of a Non-Commercial Transitional Development, from which to start a Commercial Development. What do I mean? We have three stages: Research Tool - Transitional Development - Commercial Development. Assume a researcher developed his Research Tool in a language such as C++, VB, Pascal, etc. The researcher used data structures (DS1) to develop an algorithm A1 to solve his problem. This algorithm is effective only in his programming environment; one might say the researcher's programming environment is ideal for algorithm A1. Now we need to transform the Research Tool into a Transitional Development (e.g. in ASP.NET with C#) on a server. But here we have quite a different programming environment. There are three possible ways:
a) Take the A1 code in C++ (VB, Pascal, etc.) and adapt it to the server programming environment, i.e. transform the A1 code into C# code
b) Take only the algorithm A1 and write new C# code for it
c) Develop different data structures DS2 and a new algorithm A2 for the researcher's problem, then write C# code for A2
As for me, I would choose between options 'b' and 'c'. The Transitional Development will contain, say, about 90-95% logic in C# and 5-10% design in ASP.NET. In the third stage we must transform the Transitional Development into the Commercial Development. In my view, the logic in C# can be reused without any essential changes, but the design can be updated substantially. Thus, we end up with about 5-10% updated logic in C# and 90-95% new design in ASP.NET. I want to offer a musical analogy for the three stages. Let's imagine that the Research is a MELODY, the Transitional Development is the NOTES for piano, and the Commercial Development is a SCORE for a symphony orchestra. First, the Transitional Developer must hear the MELODY and then write down the NOTES. Then the Transitional Developer, together with a group of professional developers (web designers, information designers, GUI designers, UX designers, game designers, and others), must write the SCORE from the NOTES. Thus, the NOTES become the link between the MELODY and the SCORE.
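To make the logic/design split concrete, here is a minimal hypothetical sketch of a Non-Commercial Online Tool, assuming a modern ASP.NET Core minimal-API project; the example algorithm (a 0/1 knapsack solver), the class names, and the /solve route are all invented for illustration, not taken from any particular paper.

// Program.cs - hypothetical sketch of wrapping a researcher's algorithm as an online tool.
// The "design" layer is the thin web endpoint below; the "logic" layer is the plain C# class at the bottom.

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Anyone on the Internet can POST their own data and get a result back,
// which is the accessibility property a Transitional Development needs.
app.MapPost("/solve", (KnapsackRequest req) =>
    Results.Ok(KnapsackSolver.Solve(req.Capacity, req.Weights, req.Values)));

app.Run();

// Logic layer: a direct port of the researcher's algorithm A1
// (here a classic 0/1 knapsack solved by dynamic programming).
public static class KnapsackSolver
{
    public static int Solve(int capacity, int[] weights, int[] values)
    {
        var best = new int[capacity + 1];               // best achievable value for each capacity
        for (int i = 0; i < weights.Length; i++)
            for (int c = capacity; c >= weights[i]; c--)
                best[c] = Math.Max(best[c], best[c - weights[i]] + values[i]);
        return best[capacity];
    }
}

// Input contract for the endpoint (bound from the JSON request body).
public record KnapsackRequest(int Capacity, int[] Weights, int[] Values);

In a sketch like this, most of the effort sits in the solver class; replacing the minimal endpoint with a polished front end would be exactly the design work left for the Commercial Development stage.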
Now I want to propose the following. Please present me with any research paper that I will transform into a Transitional Development. I would like to see such a paper from my area of interest (e.g. Combinatorial Optimization). If you accept my proposal, I will inform you about all the steps in detail, from beginning to end. The goal is to debug the concept of "Transitional Development" on concrete examples, in order to create a methodology (approach, theory, etc.) with recommendations that can be used in practice. For this purpose I created a LinkedIn group, "Transitional Developers Association" (TDA), to present a detailed report about all the steps. If you share this idea, please join this group.
The discussion is really interesting and quite informative, which helped me broaden my knowledge about the problem being faced. I also wrote an article about the issue on my blog "http://masteringresearch.blogspot.com/". Until now I thought the problem was faced only in my country, but now I can say that the problem is global.
The answer is quite simple and easy.
Anyone can conduct research and describe best practices in a paper merely by critically reviewing other research papers. 90% of research lacks testing and implementation of those best practices in the software development life cycle. Dozens of research papers state practices, but in a real production environment no one even follows the simple waterfall methodology. Even if teams try to implement the best research in development, there is a lot of risk, as there is no time for testing a new methodology while building software when time and money are crucial in the current environment. No one takes the risk.
Gennady Fedulov provided excellent answers and I don't have much to add here. I definitely agree with him on many points, and I really liked his suggestion to reduce the gap between researchers and developers. However, in my opinion, besides maturity, human nature's reaction to change also has a strong influence on this subject. Developers are used to doing things a certain way, and when something new appears, even very proactive people show a certain degree of resistance. Maybe this is not the most important factor, but I guess it has a considerable impact on the tendency of developers to take so long to start using the novelties developed by researchers.
If you look at this discussion as a whole, you can see why academic prescription tends to fail. Many different views, some proposals, but no evidence. Much research is minute in scope; some of it will creep into products, but not dramatically - usually as an extra to an existing product, i.e. part of a sales pitch. In the early days of software development, with new tools/procedures out every day and everyone pitching to sell their wares, we were overloaded - so much has completely vanished. What survived rarely delivered, and so we can ask: where are JSD, JSP, Yourdon, ISAC, HOS, SSADM, SDL, Software through Pictures, CCS, Z, VDM, Hope, Miranda, Prolog, Algol, GENOS, etc.? Yes, some still hang on, but not in the mainstream. If a researcher has a brilliant idea - and can prove it - take it to a venture capitalist and persuade them; let the market decide. If it succeeds the developers will follow.
Frankly speaking, practitioners don't apply the latest research because of the research-practice gap. Industry lacks real practical research because developers don't want to experiment with new methodologies; it is risky for their organization.
In my humble opinion, and to keep it simple, the reasons include: 1) Most often, research lacks testing in a real environment. 2) Published work is bound by a limited number of pages, which makes it hard to explain the solution and give examples to the reader, and this discourages practitioners from trying new solutions. 3) The most important one is "time to market": the need to deliver fast makes trying new ideas more risky.
In my view, it is impossible to be an expert in all fields. Most software developers are not researchers, and most software researchers have no knowledge of programming. The best examples are applied fields like chemoinformatics and bioinformatics: most subject experts or researchers in the biological/medical/chemical sciences and drug discovery have limited knowledge of computing. Even people involved in designing scientific algorithms either have no interest in coding or cannot write programs. Yet this is important for writing software or web servers based on the best scientific algorithms.
I want to thank Gajendra Raghava for joining our discussion. Now I want to comment on his phrase "This is important to write software or web server based on best scientific algorithm". I want to interpret the notion of the "best scientific algorithm" correctly, so I want to ask: who must develop such a best scientific algorithm? If I understood Gajendra Raghava correctly, such an algorithm must be developed by a researcher. Assume that this is true. Then I want to ask the following questions: What properties must this algorithm have? How does this algorithm handle the real-life environment? Is this algorithm accessible to practitioners for solving their tasks? Here we return again to our discussion about "Why don't developers use the best research on software development?" Using this opportunity, I also want to mention my opinions at the links:
https://www.researchgate.net/post/How_can_the_theories_of_computer_science_be_made_more_useful
https://www.researchgate.net/post/What_is_the_best_way_to_evaluate_a_researcher
I will give the example of our group. We take an important biological problem and the available experimental data; we train our models on one portion of the data (called the training dataset) and predict on the other portion (the testing data). We evaluate the performance of our method by comparing observed and predicted data; ideally, the predicted and actual/observed data should match. This way we get the performance of our model, then we compare our model with existing models and demonstrate that our algorithm is better in terms of accuracy. This way we are addressing an important practical problem and demonstrating that our strategy (pattern recognition, feature selection, algorithm) is better than existing strategies. In order to provide a service to the community (actual users such as biologists, researchers, and drug/vaccine developers), we developed a web server. This way users get web services based on the best algorithm. In order to make our software/web service trustworthy, we send a manuscript/paper based on our algorithm/software to reputed scientific journals for peer review, so field experts can give their view on our work.
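Since the comparison of methods here is described in terms of accuracy, it may help to spell that metric out for a held-out testing set (this is the standard textbook definition, not anything specific to this group's web server):

\text{accuracy} = \frac{\#\{\text{test examples whose predicted label matches the observed label}\}}{\#\{\text{test examples}\}} \times 100\%

A method is then reported as "better" when it attains higher accuracy on the same testing data than the existing models.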
In my opinion, Gajendra Raghava made a very strong "movie" by showing his example. My area of interest is Combinatorial Optimization. Therefore I was happy to see an intersection between microbiology and Combinatorial Optimization (CO) in the thesis "Martin Paluszewski: Algorithms for Protein Structure Prediction", available at the link
http://curis.ku.dk/ws/files/14772124/Pages_from_Martin_Paluszewski_Thesis_1-100.pdf
Now I want to comment on Gajendra Raghava's example from the point of view of CO. First, I want to apologize for my free interpretation of notions such as "best algorithm", "strategy", and others.
__________________________________________________
1. Phrase "This way we got performance of our model, then we compare our model with existing models and demonstrate that our algorithm is better in term of accuracy"
If I understand correctly, "our model" must be more relevant to reality than the "existing models". For example, if "our model" (say M1) is worse than some model M2 among the "existing models", then even the best algorithm for M1 can give worse output than any solution for M2. If I understand correctly, a strategy is "a model + the best algorithm for it", so the best strategy is "the best model + the best algorithm".
___________________________________________________
2. Phrase "We evaluate performance of our method by comparing observed and predicted data"
In my understanding, the actual/observed data play the role of an Optimal Solution (OPT). Assume the best strategy produced an automatic output (predicted data) in the form of an approximate solution A1. Then a metric q = 100% * (A1 - OPT) / OPT can be used to evaluate the quality of the predicted data (A1) against the observed data (OPT). As I understand it, the observed data were previously calculated manually. Now let's assume that the observed data (OPT) are unknown. Then a problem arises: how to evaluate the quality of A1 if OPT is not known? This problem is very important in CO, but here we are dealing with microbiology. So a question arises: can a Lower Bound (LB) to OPT be found, where LB <= OPT?
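To spell out why such a lower bound helps (a standard argument in combinatorial optimization, written here for a minimization problem with a positive lower bound, not anything specific to the microbiology application):

q = 100\% \cdot \frac{A_1 - \mathrm{OPT}}{\mathrm{OPT}} \;\le\; 100\% \cdot \frac{A_1 - \mathrm{LB}}{\mathrm{LB}} = q_{\mathrm{LB}}, \qquad \text{because } 0 < \mathrm{LB} \le \mathrm{OPT} \le A_1.

So the computable gap q_LB is an upper estimate of the true gap q: if q_LB is small, the approximate solution A1 is provably close to the unknown optimum OPT.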
Practice without theory is blind, but theory without practice is dead... There are a lot of reasons why practice does not adopt academic approaches. The main reason is the lack of real value in most techniques and "paradigms"... too complex, too fuzzy, too impractical, lacking documentation and technical support... But I believe any really good method will be adopted by practice... Well, in some cases it takes time to switch... I mean the so-called ecosystem problem (e.g. it is well known that some "big players" promote their products directly in schools, colleges, universities, etc.).
Good answers. As an industrial software engineer (by day, researcher by night) I think the key points are:
- Most academic research is developed without reference to the problems that developers think they are facing
- Little academic research is tested in realistic environments
- A lot of research is abandoned in 3-5 years due to funding cycles whereas industrial adoption takes 5-10 years
- Very very few researchers attempt to package their research so it's ready for real use
- Very very few researchers put any serious attempt into communicating and marketing their research
If you're a researcher, consider this point: when was the last time you went to a mainstream developer conference like QCON or GOTO? That's the place that self-starting developers interested in new ideas might hear about your work.
I realise that the academic system causes a lot of these behaviours because citation is all and application isn't recognised as valuable. But this is why this situation is as it is.
I think this problem is more general than just concerning software. As Dr. Fedulov was talking about the need for 'Translational Developers', we can also formulate the need for 'Transformative Researchers' on the side of science. They would operate on the front line of research and, in addition to exploring new insights about novel phenomena, they would also explore new principles for system/product/service development. There is a paramount need for this. Just consider: we have learnt an extremely large number of physical, chemical and biological phenomena that could be used as the basis of, e.g., sensors in cyber-physical systems, and only a very small fraction of them has been utilized as such. For me, and for intelligence ecologists, this is a waste of knowledge. The role of 'Transformative Researchers' would be exploring affordances (of scientific knowledge) in (social) context and contributing to their effective utilization. First of all, our science and design education should be adapted to this need.
I watched an NSF panel discussion once in which they tried to make it clear that the NSF would really like to see researchers transition their work to industry. During the question section a researcher got up and pointed out that the NSF does not provide funding for the purpose of transitioning research. He asked if this stance by the NSF meant that they would start providing funding so researchers could afford to spend the time needed to properly transition their work. The answer was roughly "we will get back to you on that."
It costs time and resources to market research, and even more to fully transition it into industry. The current culture rewards paper publications highly, and funds novel research, but there is little financial or cultural support for engaging in research transition activities.
There is a huge cost gap between "proof of concept" and "prototype" software on the one hand and polished, tested, well-documented software ready for distribution, shrink-wrapping and sales on the other. The typical research grant doesn't provide enough money to develop software that is ready for sale (nor is there really a market in most instances). A student/post-doc/investigator simply needs their program to work well enough to answer the scientific question; the "answer" then goes into the paper/thesis/grant. There are institutional requirements for advancement based on the number of papers/theses/grants, which are in opposition to good programming practices. Students and investigators do not get promoted based on good programming practices - although they may, in rare instances where a program is widely adopted (and they used good programming practices). In other words, these institutional requirements for students/post-docs/investigators largely encourage the development of disposable code, unless your application is attractive or novel enough to have a large group of users.
To Terry Braun ... exactly. In fact, these days you don't need a polished product, you need a credible start with good tests and some reasonable examples and documentation. But your point is well made. There is a significant gap between a PhD student's PoC and an open source project ready for industrial use.
This is a very complex problem. A simple but somehow disappointing answer is that software professionals should adhere to the ACM/IEEE Code of Conduct by consistently investing time in continuously developing their personal knowledge of developments in the science base. It is disappointing because I doubt engineers and their employers look at this as essential to the vitality of their work. Employers of software engineers could arrange for weekly seminars over lunch or around team meetings for fellow engineers to tell one another about what they've been reading about recently. Then they would get team building for free as well as improving the intellectual capital of their human resources.
A more in-principle way to think about the problem is as follows: science aims to address universal problems, constrained by theoretical and empirical state-of-the-art knowledge; practice is all about particular problems, constrained by technical infrastructure and customer requirements. In principle, a scientist could look for candidate instances of their universals in contemporary practical problems and abstract away from the 'noise' of that particular setting. In principle, a practitioner could look for candidate part-solutions to their needs in contemporary scientific work by trying to match elements of their overall problem set onto relevant areas of CS research. However, the categories into which computer science research and practical software development are sorted are not well matched, the mechanisms for doing this in either direction are under-explored, the opportunities for factoring such activity into either science or software engineering work do not exist in current process models, and the knowledge required to make it possible, even if category matching and process concerns were addressed, does not exist.
Software is a system. It is not a simple matter to assume that, for example, state-of-the-art interaction techniques are compatible with the technical environment into which a new development must be introduced. Matching might work in some way at a component level; it is anything but certain that it will make sense on integration.
The process constraints on software development do not reflect the process constraints that apply to scientific research. Project managers would have to explicitly cost in such activity and clients/funders would have to underwrite the bill.
Computer Science is now highly specialized: it is not easy for a specialist researcher in one area (e.g. machine learning) to appreciate the significance of work in another area (e.g. cryptography). So much harder then for the practitioner.
The translational scientist and/or transformative researcher are good ideas for new specialist roles but cannot succeed unless systems for their operation in the production of new science and of new software are also invented.
When it comes to software development in the IT industry, there is a deadline attached to it.
1) That makes it very difficult to adopt a new methodology from recent research (which may not have been proven to succeed with one hundred percent certainty).
2) Software development in IT is always based on software previously developed by that company. Every company has a domain in which it works. Naturally, the developers work by analogy and start the development that way, since it is well known to them and they feel comfortable doing it. Also, success rates are high with this strategy. Plus, it saves money and resources.
The best research in the field of software development is strongly related to the research methodology adopted for its evaluation. To come up with the best research in SE, it is of great importance first to pick research questions/topics that are really useful for practitioners; the methodology then has a strong impact on the result. We should use the Systematic Literature Review as a methodology to establish the relevance of our research in general, and in software engineering in particular.
The reason software developers do not use findings from research in software development is that they are not directly involved in the research. In order to come up with a good result, researchers should focus on issues in software development from both perspectives: theoretical and practical. That is why conducting action research or collaborative research with industry is important for bridging the gap, and also for knowledge integration between software developers and researchers.
Software developers work under demanding time constraints and the pressure that they face is the pressure to "complete this application, module, test, ..., etc." by a specific date/time. They are so busy and focused on completing the immediate series of tasks for their organizations that they rarely have the luxury to investigate the latest trends in software development. When a developer adopts something new, due to the demands on their time, any new thing which they adopt must be ready for immediate use and not require too much time to integrate into their current approach to software development.
If a new software development philosophy, tool, or trend developed by a (basic) researcher (computer scientist/mathematician) has been tested in the academic realm, but has not been further developed and modified to be ready for adoption and use by the developer (programmer/engineer/analyst) community, it will not be adopted, because it will require a large amount of time and a great deal of mental effort to adapt the new "thing" for use in the industrial environment. As Gennady Fedulov indicated above, there is currently a bifurcation between researchers and developers that must be bridged by a third type of professional that he calls a 'translational developer'. I completely subscribe to his model and the only difference is the terms that I use. We do need to build a group of professionals in that middle tier to ensure a smoother transition between 'academia' and 'adoption'. I see the three tiers of software development professionals as follows:
1. Basic Researcher (Researcher)
2. Applied Researcher (Translational Developers)
3. Software Developer/Application Developer/Model Developer/Programmer/Analyst (Developer)
The applied researcher is someone who understands the environment of the basic research and understands the general trends, which research areas are most likely to have wider use, etc. Ideally, the applied researcher has spent some portion of their career in the basic research arena. The applied researcher also knows what will work in the 'real-world' environment of software development, and should be someone with experience in the day-to-day world of the software developer, who understands the frustration of trying to fit a new method, philosophy, or tool into an ongoing project with inflexible deadlines. There currently is no formal training for those applied researchers, and most of those with that skill set have been both researchers and developers (Fedulov's terms) during their careers. We need to identify those with that set of talents and develop formal career and training paths for them. This will encourage more developers to adopt the latest software development research into their normal workflow.
Why do people think that developers aren't already excellent researchers? The more you develop, the more you have to extemporize. Then you refine your approach and become more efficient. Then some pundit (often with less experience) tells you to do it another way - frequently you are compelled by management to do it that other way, and it's no better than what you knew in the first place - you then run out of time... which leaves you feeling a lot less charitable about researchers.
If you can prove your solution before selling it then we'll buy it.
Incidentally domain knowledge is often more significant than computing knowledge - I spend more time researching the domain for a job than researching software techniques - to do otherwise would lose me my job. (I am a developer and a researcher)
Because it is not always feasible with the given resources i.e. budget, time and other resources.
Eric Hall hit the nail on the head with his answer. I am not a developer, but work in the R&D team with developers. There is tremendous real pressure from Executive Management, Sales, and Marketing, to produce new software. Meanwhile, staff hiring has been stagnant or barely keeping up. Developers and everyone in all departments of software companies continue to be pushed to deliver more with less. I would say it's rare to have time for developers to research under this type of pressure.
In my field of CAD software, only Autodesk as far as I know offers a sabbatical after a certain number of years so staff can do deep dive research for a few months into a field of interest. Autodesk employees - please chime in.
Ardhendu
Planned research may not know what is needed - it tends to concentrate on budgets, resources and the sales pitches of alleged research targets. Developers stumble over what is needed and often solve it in the course of their task. They may not publish, but over time they can acquire more knowledge than their purely academic counterparts. It is a different way of arriving at the research experience, but it can be just as productive. It is usually the case that they rarely have time for academic-style research (see Joseph's answer above).
I think there are several reasons for this. It would be too easy to just say "because of costs". While this is one reason, there are many others. Very often, researchers simply are not aware of better software, or they simply cannot decide on it because they have a very different expertise. Moreover, research projects typically last 2-4 years, sometimes even longer. For an IT time scale, this is quite a long time, and much new software and many updates appear on the market. However, it was decided at the beginning of a research project which software should be used (which might have been the best one at that time), and later on the software is not changed anymore, although a better one might have appeared on the market or in the research community in the meantime.
As it was said by Einstein : "In theory, theory and practice are the same. In practice, they are not"
IT is a hard field to teach. Teachers would need to be trained in each new "good" technology, and that is not the case. As I read above, a lot of projects are based on others, and so is the code. Industry uses the most widely used and taught technologies; it is simpler to find people able to work on them and to get long-term support.
The reason for this is that the vast majority of researchers do their research in order to write papers rather than to contribute something useful to software development. They are forced into this by the metrics used in universities to assess ability, i.e. papers published. Most research is never really tried in practice, because this takes a long time and often generates negative results (i.e. this research doesn't really work). Not only does this depress the researchers, it also isn't that interesting for publication (though it should be).
Researchers cannot be blamed for this - they are smart people who work within their system - it's just that the system is badly broken and does not reward or encourage work that is long-term and practically useful.
I recently tried to apply software engineering research techniques on a large project (a system designed for > 1 million users). Not one of these techniques was useful in practice because they don't scale to large systems.
As many people have already mentioned, there are (at least) two important issues here:
a) There is a large gap between research and practice in terms of the topics of interest and the types of methods that are being developed. People with a software engineering background might be keen to criticize researchers for doing work that is too 'theoretical' and not very applicable. However, it seems to me that they are all missing an important point: if we were all to work exclusively on problems of clear practical relevance in a very direct and straightforward way, that would be the same as performing a sort of "gradient descent/ascent" in the knowledge space - an optimization approach that is bound to get stuck at a local optimum. Yes, it converges faster than a 'global search', which means that we would get some good practical solutions faster than we do today, but it would be much more difficult to make any meaningful 'leaps' in understanding and technology, since such leaps call for freedom to experiment, open-mindedness, and incentives that are not exclusively commercial and market-driven. Do you think that SVMs would have been developed if people hadn't been playing with theory and mathematics and had just focused on fine-tuning the 'existing' classification methods? There are many examples like this. Sometimes you end up making the most powerful practical impact without knowing in advance what is going to happen. This is the beauty of science. However, I do also agree that there are researchers who are not interested in having their methods used and who just publish for the sake of publishing - this is clearly a bad thing.
b) I have worked as a software engineer before, I will certainly work as a software engineer / researcher in the future, and I have met many people who work on developing practical systems in various IT companies. Very few of them care about what the state of the art is; very few of them read any research papers at all. Most of them just want to 'hack' something together as fast as possible and then maybe tweak it a little and incrementally improve it later on. However, this sometimes leads to a bad system design, and it proves to be prohibitively expensive or time-consuming to drastically change the underlying methods once the initial system is in place... and this results in clearly suboptimal performance that, in the end, loses money for the company. The problem here is also dual: it is true that some of these guys don't really like math, and many state-of-the-art approaches are quite heavy on math these days... but it is also the managers who don't understand the need to steer their engineers towards reading the relevant research papers from the top journals and conferences, or to hire and consult some researchers if that is not possible.
In the end, I would just like to say that the current compromise that we have in Europe (EU-funded projects where many institutes and companies meet to work on something together) doesn't really work all that well. There are always problems with communication, synchronization and - most of all - quality control. Each party involved has its own objectives and desires, and the resulting software usually ends up being a mess, full of bugs and issues that no one will be there to take care of once the project has ended. Maybe it's not the concept that's wrong, but the execution - I don't really know - but we definitely need to try to bridge the gap between research and industry in the future.
Fundamental research and applied research cannot be split: the first deals with the mathematical aspects of new theories, laws or models, while the second is entirely experimental and related to the real problem-solving side of engineering, medicine, environment, industry, economy, etc. Both need a basis of theoretical approaches, as well as software tools and/or development. If some research teams are not experienced in software development, many employ developers, and several developers become the heads of their research teams and projects, with successful findings. A developer can be an excellent researcher when involved in research, just as he can be an excellent engineer/developer when involved in industry. The question is how to attract developers into a research career when an industrial and entrepreneurial career is more advantageous and valuable! Another problem is the publication competition, or race, that limits researchers' time for investigating the applicability of their research, focusing their interest on fast computer-simulation aspects.
I see several perspectives on this question; they follow:
- Project timeline and budget versus productivity: research requires some up-front work before the results come faster.
- The intention to solve the problem tactically faster with existing knowledge (or expertise) versus exploring new things (an up-front investment).
- Research-oriented mindset: not all developers come from a research mindset; many explore simply by putting things into practice and trying them out.
- It is not always easy to translate a research paper into practice.
Why would developers want to use the best research on software development?
Why do we need to be software-dependent? ... :)
I absolutely agree with Ian Sommerville: the academic system is putrid in most cases, since it prioritizes the number of papers (and sometimes their quality) over anything else, such as the soundness or impact of the contributions, technology transfer, etc. As a result, academia makes things difficult for industry, which has other variables to deal with, like time-to-market, closing budgets, etc.
It is very true that software developers, in general, don't use the best research on software development. Moreover, the gap between researchers and software developers is widening over time, and the gap between academics and software developers grows with every passing day. The need of the hour is to have software developers, researchers and academics working in tandem for the betterment of the industry, which will help the software industry adopt the best research practices on software development.
I totally agree. From my experience, this highly depends on the practices used by the company the developer works for, or on the environment to which the software will be deployed. I personally think that developers use best practices and not best research. This is about technology proven to work in solving real problems. Moreover, most software projects tend to minimize the risks of not using proven technology. I believe that research applied to solving real, practical software dilemmas is what will make developers use that research, because only then can they maximize the added value and quality of their software products.
I am really sad about this gap in computer science. In other fields like healthcare, practitioners and researchers work together; often they are even the same people, working in hospitals and research centres with real-life patients. From my experience, when you develop a new practice, technique or whatever, the most difficult part is to find companies that will hold out their hands and provide you with real-life systems and teams with which to conduct empirical evaluations. Of course, when you try to do that with a company, they want evidence that your new approach has been tried and tested before.
Because there has always been a big gap between research and in-practice software development. We have always been slow and poor at developing new tools based on the latest research results, or tools that can integrate new research findings and be modified accordingly.
Sad, really sad, to observe that the software development world is soooooooooo slow in becoming a real engineering discipline, as prompted by pioneers like Prof. Bauer (who "invented" the phrase Software Engineering almost fifty years ago), Prof. Parnas (in many talks and papers, never tiring of calling for more responsibility and liability), and many more. The supposed gap between research and industry is not necessary; it is a myth upheld by some stakeholders (who may have their reasons, or not). Sad, really sad. Where else in industry do we have this silly belief in two separate worlds which should be left alone? Today I read again the reports about the annual economic losses caused by unqualified software development (for many reasons). It's a horror story. Who are the winners? Who are the losers?
I wonder if it is possible... no, perish the thought. Alright then, I'll say it. Could it be that we (practicing software engineers) might just know more about software engineering than our academic colleagues? Of course not, I can almost hear you saying, and therein lies the problem.
The initial question "Why don't developers use the best research on software development?" is indeed generalizing in an unfair way - unfair with respect to individual software developers. As a generalization it looks at the collective, the community, the professional body of "software developers" and implicitly claims that "no [software] developer [is using] the best research on software development." Then it asks "why?".
Of course, this is highly suggestive, and that's why I said it is unfair. The explanation accompanying the question doesn't provide any specific evidence (recent data, reports, books or conferences) by which this claim might be corroborated (I intentionally avoid the term 'proved' here, because that would be much stronger still). Thus, for me, this implicit claim was and still is a generalization in the form of a hypothesis. And we at RG may discuss any thoughts about it.
Indeed, it is easy enough to find exceptions to the rule - pardon, to the implicit claim. Especially here at ResearchGate, where you find a colourful mixture of theory-oriented and practice-oriented researchers, and a lot of in-betweens. And of course some software developers (surely not all) have a university degree and so are implicitly, or rather tacitly, applying all sorts of best research from their university days and - hopefully, in the context of life-long learning - also more recent best research results.
Also, everybody knows famous counter-examples like Edsger Dijkstra or Donald Knuth, to name just a few pioneers, who, by necessity (they had no choice, the poor souls), were both excellent programmers and excellent researchers and teachers (one of my colleagues a long time ago was a disciple of Dijkstra).
So, is the initial question (totally) misplaced or misleading?
I ended my last contribution with the question "... is the initial question (totally) misplaced or misleading?" The simple answer is: "No, if you interpret it on the collective level (see my previous comment) and add all available relevant evidence to support the implicit claim (as a generalized statement), which would explain some unwanted implications."
What are those unwanted implications? I guess they might be the billions in economic losses incurred by software projects and products that haven't been successful in one or more ways. That's exactly what I alluded to in yesterday's contribution.
And yes, there is a lot of evidence that has come together over the years which testifies that we (again, as a community) are unbelievably lenient with respect to software accidents (cf. the book 'Computer Related Risks' by Peter Neumann and his 'Internet Risks Forum', running for more than 20 years). Now, this need not be so dramatic or exceptional if the total revenues on the positive side of the balance far exceeded those losses.
For example, we (as a society) accept as unavoidable that year after year thousands of people get injured or killed in traffic, because we feel that we can't live without it and thus can't afford stricter rules and/or more expensive precautions to get this number down, e.g. to zero, because a human life is estimated to be worth infinity (or not?). Well, actually, the famous Sebastian Thrun, director of the Stanford Artificial Intelligence Laboratory, and many others are working on a solution, but... it will be very expensive and, for some people, not acceptable; I refrain from further comment :-((
http://en.wikipedia.org/wiki/Productivity_paradox
Unfortunately, in the IT-business it is worse than the situation I explained in my previous note: high losses, but much higher profits, so a positive ROI.
Among economists and applied IT researchers there is a famous 20-year-old paradox called the Productivity Paradox (cf. the link above). In a nutshell, it says that all the investments in IT over the years have not shown any visible or tangible net effect, however you calculate it. One of the books in which it is explained in detail is Paul Strassmann's "The Squandered Computer".
Of course, not every stakeholder accepts the conclusions of such research, which is still ongoing. On the other hand, I didn't find a *definitive* resolution or rejection of the paradox - only some suggestions as to why it looks like a paradox but isn't really.
Now, for me, the initial question is just a variation on this theme: why do we (still) accept so much loss in the software industry when potential remedies in the form of best research results are available? See the many institutions devoted to upgrading the whole business, from requirements engineering via automatic code generation and verification to overall software project management and software maintenance.
Could it be that the mindset of many a software developer is still the same as the one found and criticized by Prof. Bauer in 1968? Could it be that the 'art of computer programming' is still dominated by the code-and-fix paradigm, merely supported by all kinds of nifty software workbenches? Could it be that effective team work in the software industry is again and again jeopardized by impossible goals and deadlines (cf. The Death March)? Could it be that there are still too many myths about what is possible and what is not, and thus, in the end, about what Software Engineering should be?
Hello all,
I've been pointed to this discussion, so I'll add my snippets. As I haven't read all that has already been written, I might repeat some considerations already made - or raise a shit storm :)
Some facts about me: a developer since 1987, mostly in software but not restricted to it; I apply whatever is reasonable and helps to increase productivity. Personally I have little concern about stability, maintainability and performance, because I have a stock of well-proven approaches and am constantly addressing these issues. (Working mainly in the automotive sector today, these are the daily needs.)
Now some aspects of my point of view:
1. "software research" is a wide field: from high performance computing optimization via database organization and algorithm development to "the software development process" - and beyond.
2. A lot of "software research" does not consider performance issues. The thinking "wait a little while - the hardware will soon be available" is wishful thinking: performance increases cannot cope with the rise in performance demand required by some newer research results.
I can remember a first evaluation of formal code validation by simulation: the software license ran out (after 30 days) with the job not yet done. I've since seen such a formal code validation computer - I'm still not sure whether it is now possible to have our complete application validated or whether they are "only" validating parts of it.
3. For my area of application, most of this research does not cover our needs. We need some algorithms and would be happy about every tool that eases our daily work. The advertisements are about "managing complexity" where "reducing complexity" is urgently required.
4. We repeatedly run into problems that were already addressed in the last millennium. Problem: colleagues from university know the "latest and greatest" denominations (aka bla-bla) for items that existed under another name before they were born, but know little about the tasks to solve. It seems that our daily issues are not in the academic focus - at least not in the focus of teaching.
5. It doesn't help to know about the latest research results if you cannot even get a mini-interpreter up and running. (Personal experience!)
6. If you are citing Prof. Bauer, I'll add the Professors Balzert (Helmut and Heide). For me, they were a great inspiration. UML users may know about Heide Balzert - but who knows what her husband wrote?
7. To some extent the new tools do not help, even if available: I have some examples of inappropriate use of tools intended to be helpful. If the trolls (aka ignorants, aka idiots) are using or configuring them, what do you expect the outcome to be? In such cases, the tools are dismissed without cause.
And so on. And so on.
If only research and teaching could give me colleagues and customers with the ABILITY to adopt the new results: I am already trying to introduce them - with little success.
A last note: I've introduced some of the proven software development paradigms into hardware development. For me personally, this works nearly perfectly. But what do you expect from the "classic" hardware engineer? :)
The best research on software development will, sooner or later, slowly or quickly, be put into practice. People, including those from industry, will eventually recognize the benefits and the need to implement it in their environment. From scientific papers, the results will turn into tools, software, books and technical reports that are read by developers. Today a lot of researchers work together with developers/software engineers, or developers work on research themselves: they do research and turn it into new products (tools or software).
A lot of big companies have research and development departments to find innovative products, and maybe they also do research in basic science. On the other hand, some universities work together with industry. So I think the gap between research and practice is narrower than it was several years ago.
A lot of today's popular methods and tools in software development, such as database normalization, UML and C++, came from research and are used by many people, including in industry.