I am not sure you are approaching your topic objectively and comprehensively. It is not enough simply to read a few important recent essays. Can you state your criteria for selecting them? Since this is a research-based project, a different, more reflective approach is usually recommended. The research field of Human-Computer Interaction (HCI) is extremely complex and interdisciplinary.
First of all, you should determine the focus of your studies and consider what you would like to concentrate on scientifically. On this basis, it is important to systematically record and present the current state of research. Only then can you formulate a precise research question, define the objectives of your study, state your working assumptions, explain the methodological procedure, work out a research design, and define detailed steps for its implementation.
Below you will find some recommendations for relevant journals:
- Foundations and Trends in Machine Learning
- Nature Machine Intelligence (open access)
- IEEE Transactions on Cybernetics
- IEEE Transactions on Systems, Man, and Cybernetics: Systems
- Computers in Human Behavior
- Cyberpsychology, Behavior, and Social Networking
- International Journal of Intelligent Systems
- International Journal of Human-Computer Studies
- User Modeling and User-Adapted Interaction
- Human-Computer Interaction
- Technology, Knowledge and Learning
- International Journal of Computer-Supported Collaborative Learning
I would use a modern product and identify the pain points. In WWII, planes crashed from human error because in some cockpits up meant down, or the buttons were in a different place; today's equivalents are that zooming in may mean rolling the wheel towards you in one application and away in another, or that drag and drop means replace, not merge. (That last one destroyed a friend's music library on his Mac forever, with no undo, because I was used to Windows, for example.)
Here is a here-and-now suggestion of how it should be done, because one company should take the lead and just do it. The user can keep using the old UI or the voice, and the voice barely intrudes on the space available for the UI visualization. For example, moving the Start menu to the middle made it less likely to be found, especially by feel; it was a very wrong move, but it was done anyway. No one liked it; that is just physics. So we have not progressed much since the mouse and Windows 98, or the first Mac.
Everyone has an opinion on UI, but after 30 years in the industry I feel I have found one that might quickly result in consensus.
Sources can include UI projects on GitHub and third-party UI replacements like StartIsBack.
This one is by a UI expert, is very academic, and is inspired by a similar idea: https://github.com/omlins/JustSayIt.jl
You could also go into AR and perhaps other 3D interfaces, but I feel those are not for the mainstream, though you could look for research on them. In industry, no one really does it; companies hire product designers and usually get a lot of meetings and user complaints.
It helps that I have an opinion on how to use natural language, with no keyboard and mouse, to control and query a PC and run tools and other applications: first by looking and speaking, later just by ear, without having to press a key to send, read instructions, or use exact commands (for example, being forced to say "open" rather than "launch", or an application's full name). The stylus and touch might continue, but the setup is a PC with a cardioid microphone or headset and a large TV or projector, built over the current UI.
The problems in UI come down to eye movements, click counts, and stress: discoverability versus the random access of the command line, but without having to remember arcane switches like /s or @2. On the completion side, the problems are completion versus code generation, resolving ambiguity, and safety on sensitive operations that can lose data or cannot be undone.
I hope it helps. It is my opinion, but like an airplane or a car, a unified UI should be very natural: you say what you see and what you want, and it figures it out, making exactly the prediction that matches the context. LLMs do this very well; it is simply a matter of adding the context and reducing the interaction to hands-free. This is not only for disabled people; it is for anyone who can hear and speak.

So everything mentioned above could be considered for other contexts, but mine is motivated by the clarity of radio communications: the NATO alphabet, volume levels, and building knowledge of each user, with some safety. Here is the breakdown. I gave it to Microsoft, but they are not putting the pieces together; it is not usable yet, though the offline dictation (voice to text) is amazing. Hope this helps, and if you can improve on it or disagree, let me know, because I imagined it would be what anyone would say, and the people who have tried it say about the same thing. Sadly, the system does not use context, so it fails. It is a work in progress, and I hope they figure it out.
>>>
I believe I have the ideal UI for a standard operating system with applications on the desktop. I will describe the full design based on features already in Windows, without citing anything, because the features already exist; they just have not been integrated.
The LLM itself is now my favourite natural interface. Just talk to your computer: describe problems and let it code and test. This is meant to be built over the existing visual menu UI in Windows, the command line in Linux, and the public API testing surfaces of each application.
For instance, if I wanted to count the circles in the current drawing in an application, I would ask the computer, "How many circles are in the airplane2 file that is opened?" and it would speak the answer.
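To make that pipeline concrete, here is a minimal sketch of the query path, under my own assumptions: `DrawingApp`, `answer_query`, and the keyword match are stand-ins for a real application's public query API, an LLM-backed interpreter, and the speech layers on either side.

```python
from dataclasses import dataclass

@dataclass
class Shape:
    kind: str  # e.g. "circle", "rectangle"

class DrawingApp:
    """Stand-in for an application exposing a public query API."""
    def __init__(self, filename, shapes):
        self.filename = filename
        self.shapes = shapes

def answer_query(utterance: str, app: DrawingApp) -> str:
    # A real system would hand the utterance plus app context to an LLM;
    # a trivial keyword match shows the shape of the pipeline.
    if "how many circles" in utterance.lower():
        n = sum(1 for s in app.shapes if s.kind == "circle")
        return f"There are {n} circles in {app.filename}."
    return "Sorry, I did not understand the question."

app = DrawingApp("airplane2", [Shape("circle"), Shape("circle"), Shape("rectangle")])
print(answer_query("How many circles are in the airplane2 file that is opened?", app))
# -> "There are 2 circles in airplane2."
```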
From what I see, all the hardest parts are done. It took more than 20 years, and I have been shopping it around. Whether you agree or your research finds another practical way, a cardioid microphone makes all the difference.
The mic array on a laptop does not work well, so I now use Voice Access in Windows 11 quite a lot. Ideally, there would be low-latency hot-word dialogue with LLMs and a domain-specific language, plus visual UI indexing, via voice to text.
The menu context can be used to weight the guesses, and if no files are open, the top-level menu items and their synonyms can be used to disambiguate the voice-to-text output.
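Here is a minimal sketch of what I mean by weighting, assuming a recognizer that returns (candidate, confidence) pairs; the menu index, synonyms, and scoring formula are illustrative, not any real Windows or Voice Access API:

```python
import difflib

MENU_INDEX = {                      # visible UI items -> synonyms
    "open":  ["open", "launch", "start"],
    "save":  ["save", "store", "write"],
    "close": ["close", "quit", "exit"],
}

def rescore(candidates: list[tuple[str, float]]) -> tuple[str, str, float]:
    """Pick the best (command, heard_text, score), boosting candidates that
    resemble a visible menu item or one of its synonyms."""
    best = ("", "", 0.0)
    for text, confidence in candidates:
        for command, synonyms in MENU_INDEX.items():
            similarity = max(difflib.SequenceMatcher(None, text.lower(), s).ratio()
                             for s in synonyms)
            score = confidence * (1.0 + similarity)   # menu context acts as a prior
            if score > best[2]:
                best = (command, text, score)
    return best

# The recognizer heard something between "launch" and "lunch":
print(rescore([("launch", 0.60), ("lunch", 0.55)]))
# -> ('open', 'launch', 1.2): "launch" resolves to the visible "open" item
```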
A safety mechanism can also be designed, such as an application-naming hot word and a safe, explicit way to "send" (press Enter) or press buttons. This mechanism would, for example, require confirmation on a low-confidence command, or on a reply-all to a corporate email that contains expletives. It would do anything that a good human assistant would do.
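And a minimal sketch of such a confirmation gate; the action names, word list, and threshold are made up for illustration:

```python
RISKY_ACTIONS = {"reply_all", "delete", "send_external"}
EXPLETIVES = {"damn", "hell"}          # placeholder word list

def needs_confirmation(action: str, confidence: float, text: str = "") -> bool:
    if confidence < 0.8:               # uncertain transcription
        return True
    if action in RISKY_ACTIONS:        # destructive or broadcast action
        return True
    if any(w in text.lower().split() for w in EXPLETIVES):
        return True
    return False

def execute(action, confidence, text=""):
    if needs_confirmation(action, confidence, text):
        print(f'About to {action}: "{text}" -- say "confirm" or press Enter.')
    else:
        print(f"Executing {action}.")

execute("save", 0.95)                           # runs immediately
execute("reply_all", 0.95, "That damn report")  # gated twice over
```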
The ultimate goal is a unified UI that indexes the visual UI, like the Windows StartIsBack menu. It would function without being told explicitly what to do: saying the application name, or a nickname, then "Open the last project, build release, and run" would be enough for it to guess the right EXE project.
It would do what GPT-4 can already do in terms of transformation, homology, and pattern matching. The existing mess of UI could fade into the background as is, and you could just say, "What is my Wi-Fi password?" It might ask for a code again. And you could tell it: "From now on, though, I will call you 'Mother,' as in the 1979 film Alien."
Another example is the scene in Blade Runner (the first film, set in 2019): https://youtu.be/dswKyUUhKMI
and other sci-fi films, such as Star Trek: The Next Generation. These sci-fi interfaces can now be built with today's technology, within a few months I would guess. Hope it helps.
Prototype brain–computer interfaces have helped paralysed people to speak in their own voices, type at unprecedented speeds and walk smoothly. And companies are working on wearable brain-reading products that aim to help users to control their mental state or to interact with computers. Beyond the hype, researchers are well aware of the risks: from the spectre of big-brother brain surveillance to the commodification of “the sanctuary of our identity”...
I've been following the insightful discussion on the latest research in Human-Computer Interaction with great interest. The diversity of perspectives and depth of knowledge here is truly impressive. I'd like to contribute to this conversation by introducing a novel concept in HCI that I've been developing, a counterpart to traditional current technologies, which I have called "Third-Person technologies."
I noticed that technologies are usually, if not always, developed from a third-person perspective, a shadow shaped by the objective nature of the scientific method.
I have switched the perspective of technology design into what I call "Subjective Technology" or "Subjective Artificial Intelligence," which I also call "0-Input Technology," since zero input is one of the properties this technology has.
Subjective Technology represents a paradigm shift in HCI, focusing on minimizing user input to virtually zero through a concept I term "0-Input Technology." This approach leverages a unique combination of VirtualGlass, KnowledgeHooks, and Subjective Virtual BodyParts to create an intuitive, user-centric experience that significantly reduces the cognitive load and physical interaction required from the user.
Key aspects of Subjective Technology that might intrigue you include:
0-Input Technology: A revolutionary approach where user input is minimized through predictive and adaptive algorithms, making technology more intuitive and less intrusive.
VirtualGlass and KnowledgeHooks: These are core components of Subjective Technology, offering a new way of interacting with digital environments. VirtualGlass acts as an interface, while KnowledgeHooks serve as context-aware triggers for actions, automating processes based on user behavior and preferences (a simplified sketch follows this list).
Interdisciplinary Application: This technology has potential applications across various fields (accessibility, healthcare, education, the economy), offering new ways to interact with digital systems and changing the way people interact with each other.
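To make the KnowledgeHooks idea concrete, here is a simplified sketch of how such a context-aware trigger might behave in code; the names, signals, and the VirtualGlass action are illustrative simplifications, not the full design described in my paper:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Context:
    """Whatever the system currently observes about the user."""
    signals: dict = field(default_factory=dict)

@dataclass
class KnowledgeHook:
    name: str
    condition: Callable[[Context], bool]   # when to fire
    action: Callable[[Context], None]      # what to do, with zero user input

    def maybe_fire(self, ctx: Context) -> bool:
        if self.condition(ctx):
            self.action(ctx)
            return True
        return False

# Example: dim the (illustrative) VirtualGlass overlay while the user reads.
hook = KnowledgeHook(
    name="dim-on-reading",
    condition=lambda ctx: ctx.signals.get("gaze") == "reading",
    action=lambda ctx: print("VirtualGlass: dimming overlay, no input needed"),
)
hook.maybe_fire(Context(signals={"gaze": "reading"}))
```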
I believe this could be a new direction in HCI: Technology design from a subjective perspective challenges traditional interaction paradigms by introducing a more organic and seamless interaction between humans and computers, akin to natural human thought processes.
Subjective Thermo-Currency: This could be one of the most interesting properties I have conceived. The subjective perspective makes it possible to build what I have called a "Subjective Thermo-Currency," which could make money obsolete as a concept.
I believe that Subjective Technology could add a valuable dimension to the ongoing research in HCI. I have detailed this concept in a paper, which I uploaded to ResearchGate a few days ago.
I would be honored if you could take a moment to review it and share your thoughts. Your insights would be invaluable in refining and advancing this technology further.
You can find the paper here:
[The research item linked here has since been deleted.]
I look forward to your feedback and hope to spark a fruitful discussion on the potential of Subjective AI in reshaping our interaction with digital systems.
A computer that combines laboratory-grown human brain cells with conventional circuits can complete tasks such as voice recognition. Researchers trained the system on 240 recordings of eight people speaking. A machine-learning algorithm decoded the mini brain’s neural activity pattern to identify voices, with 78% accuracy. The technology could one day be integrated into artificial-intelligence systems or used to study neurological disorders...