After an introduction to the topic of "Artificial Intelligence" and a panel discussion on "The Impact of Artificial Intelligence on Society" during our first Digitalk series in spring 2020, Nicole Kirowitz spoke with Dr. Ramin Hasani of TU Wien about artificial neural networks.
The interview was conducted in English.
Dr. Ramin Hasani
Vienna University of Technology
Ramin Hasani is an artificial intelligence scientist at the Vienna University of Technology (TU Wien), where he completed his PhD studies. He will be joining the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT as a postdoctoral associate. His primary research focus is the development of transparent artificial intelligence systems, robotics, and autonomous systems.
(Photo: TEDx Cluj)
Nicole Kirowitz: Dr. Hasani, in our first two digitalks this year on artificial intelligence, we had an introduction to the term "Artificial Intelligence" ("AI"). It soon became clear that we have no exact definition of artificial intelligence, as we still struggle to define what natural intelligence is. Can you tell us the difference between natural and artificial intelligence?
Dr. Ramin Hasani: Artificial intelligence (AI) refers to any computer algorithm that tries to improve its performance on a given task without being explicitly programmed. Such a system can solve complex problems by applying a set of rules to knowledge graphs, in which case we call it an "expert system". Or it can automatically learn to solve tasks directly from observational data, in which case we call it a "learning system".
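To make the distinction concrete, here is a minimal sketch (an illustration added for this write-up, not from the interview) of a "learning system": a single perceptron that learns the logical-OR rule purely from example data, without the rule ever being written into the program.

```python
# A "learning system" in miniature: instead of hand-coding the rule for
# logical OR, a single perceptron learns it from labeled examples.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a 2-input threshold unit from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Nudge the weights in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Observational data: the OR truth table. The rule itself is never stated.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

An expert system, by contrast, would encode the OR rule directly as an explicit `if` statement; the learning system recovers equivalent behavior from the data alone.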
Natural learning systems are the nervous systems of living beings, which can exhibit complex behavior and have been shaped by roughly 600 million years of evolution.
Neuroscience has been the primary source of inspiration for the development of artificial learning systems.
Nicole Kirowitz: Dr. Hasani, is it possible for artificial intelligence to replace human intelligence one day?
Dr. Ramin Hasani: It is certainly possible. Even now, AI systems are better than humans at certain tasks. One example is the detection of cancerous tissue in organs directly from radiographic images, where a branch of AI algorithms based on a technology called deep neural networks outperforms human doctors.
I believe that the artificial intelligence systems we train to drive cars and mobile robots autonomously are as reliable as humans, if not better!
The field of natural language processing (NLP), which covers tasks such as translation, question answering, sentiment analysis, personal assistants, storytelling, broadcasting news and many more, will be fully taken over by AI systems no more than a decade from now. So "language" as we know it today will become obsolete and will evolve.
Nicole Kirowitz: You were invited to the TEDx conference in Vienna in 2018 and to TEDx Cluj in 2019, where you spoke about artificial neural networks. Can you explain to us what artificial neural networks are, and why your research on them is important for the development and use of AI?
Dr. Ramin Hasani: Artificial neural networks, which loosely mimic how natural brains compute, are a class of learning systems with a remarkable ability to solve complex tasks that classical AI algorithms could not handle, such as high-dimensional games (Go, Shogi, StarCraft, Dota), self-driving cars, natural language processing, and a large set of computer vision tasks.
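As a hand-wired sketch (again an editorial illustration, not from the interview) of what layered computation buys: a single threshold neuron cannot represent the XOR function, but two layers of such neurons can. The weights below are set by hand purely to expose the structure; in a real network they are learned from data.

```python
# A minimal feedforward network: two layers of threshold "neurons"
# computing XOR, a function no single neuron can represent.

def step(z):
    """Threshold activation, a crude stand-in for a neuron firing."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one neuron detects "at least one input on" (OR),
    # the other detects "both inputs on" (AND).
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output layer combines the hidden features: OR but not AND.
    return step(h_or - h_and - 0.5)
```

Deep neural networks stack many such layers, with learned rather than hand-set weights, which is what lets them handle the high-dimensional tasks listed above.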
Although deep neural networks show unprecedented performance on these high-dimensional real-world problems, assuring their safety is extremely challenging. This is simply because their computational graphs are so complex that we can think of them as black boxes. And obviously, using black-box models, however high-performing, in safety-critical domains such as medicine, automation and law is a risk-sensitive matter. What I mean is that, for instance, we cannot let an artificial-intelligence judge find a person guilty without knowing how this "artificial judge" arrived at its decision. The machine learning community deals with such challenges through branches such as explainable AI, ethics in AI, fairness and transparency.
My research has concentrated on making sense of these black boxes and their underlying complexity, with the goal of learning how to produce safer neural network systems. In particular, I worked on robotic systems and tried to build safer and more transparent neural network agents for deployment in high-stakes decision-making processes.
Nicole Kirowitz: You were just talking about how neural networks can make the use of AI safer. There are many good things artificial intelligence could be used for, but we also have to deal with the risks and fears surrounding AI. Just think of the corona app, which is used to observe the population's behavior. Where do you think artificial intelligence is most likely to be used in the near future? And what are the risks of using AI?
Dr. Ramin Hasani: Safe AI (an intelligent agent built so that we can understand its decision-making process) can undoubtedly help us understand the world around us better and, as a result, improve our daily lives. In the near future, intelligent agents will be humans' personal assistants, teachers, drivers, online shopping carriers, medical recommenders, translators, fashion designers and many more.
Like any other technology introduced to humans, it has shortcomings as well. The technology, and in this case AI, is not a threat; humans are. As long as we scientists commit to building "AI for Good", we are good to go.
Nicole Kirowitz: How can AI-research help neurosciences?
Dr. Ramin Hasani: As I mentioned before, learning systems have a strong ability to mine information from data to solve problems. At every scale of neuroscience research such as developmental neuroscience, neurogenetics, molecular and cellular neuroscience, behavioral and cognitive neuroscience, clinical neuroscience, neurophysiology, and sensory neuroscience, artificial intelligence can effectively help make sense of collected data.
Nicole Kirowitz: Do you think that we can build a human-like AI which will be self-aware? As we already try to imitate the human brain with artificial neural networks, isn’t it legitimate to assume that artificial intelligence could replace us humans in the future?
Dr. Ramin Hasani: You are talking about consciousness. It is a fascinating question that we have not yet fully answered even about ourselves. Although we have studied consciousness for centuries, our best conclusion on the topic so far is that "it exists", and nothing more. So, to keep my answer short, my best guess is that once we can find a unifying definition and origin for it in natural learning systems, we can wonder about conscious algorithms and self-aware AI systems.
Oh, and in the future, I can most certainly assure you that humans and machines will coexist.
Nicole Kirowitz: Thank you very much, Dr. Hasani, for this interesting and insightful interview. I have one last question for you. What inspires you most about artificial intelligence, and especially about artificial neural networks?
Dr. Ramin Hasani: Thanks for having me. Our current version of artificial intelligence systems is still too dumb to be inspiring! I believe that to make “insightful” and more general AI systems, there has to be tremendously more interdisciplinary scientific research work coming together. AI can become AI only if mathematicians, computer scientists, physicists, neuroscientists, ethicists, philosophers, sociologists and psychologists actively work together.
Nicole Kirowitz: Thank you again very much for this interview. I wish you all the best.
Feedback and Outlook
We look forward to hearing your opinion on the topic of "Artificial Intelligence". After registering, you can leave comments and questions.
We also welcome your feedback, suggestions and questions on our social media channels:
Alternatively, you are welcome to email us at: firstname.lastname@example.org
Have you subscribed to our newsletter yet?