This video is both an introduction to the recent paper Thinking Fast and Slow in AI by Francesca Rossi and her team at IBM, and to Luis Lamb's most recent paper Neurosymbolic AI: The 3rd Wave.
Both papers draw inspiration from human capabilities to build a future generation of artificial intelligence that would be more general and trustworthy. They also raise 10 important questions for the AI community to consider in its research. Enjoy the video!
(Note: this transcript is auto-generated by YouTube and may not be 100% accurate)
[00:00] Okay, let me tell you about my experience in dealing with trust and AI ethics in general, and also what I think are some of the main points, some technical and some not, in really achieving this ecosystem of trust around AI. This is the overall picture that many of the previous panelists put forward: we want the future of, and with, AI, because of course I don't think of AI as just autonomous systems, but also as systems that work with us. So for me it's not just the future of AI, but also with AI. And it has all these desirable properties: trustworthiness is one, but also being general, collaborative and, as already mentioned for GPT-3 and very large language models, computationally sustainable. But to focus on this third panel: how do we build an ecosystem of trust? I talk about an ecosystem of trust because it has many dimensions, just like trusting other people has many dimensions. Of course we want AI systems to be accurate, but beyond accuracy we really want a lot of desirable properties. One of them I call, as some other people do, value alignment: how do we want these machines to behave according to some values that we care about? One of those values, of course, is fairness, so we want bias to be identified, removed, and so on, but there may be other values beyond fairness. Robustness too, generalizability beyond some data distribution, and explainability. Explainability is very important, especially in the context of machines that work together with human beings.

[01:52] Now, differently from what we would expect in building trust with another human being, here we are not in the presence of another human being: we are in the presence of AI systems that are created by human beings. So, just like Margaret and others have said, we also want something from those that create that AI system: from the developers, from the deployers, from those that use the AI. One of the things that I think Margaret pointed out very clearly, and we have a very similar approach, is that we want transparency: transparency about the decisions that have been made during the AI pipeline, whether they are decisions about the training data or other decisions. In a very nice visual way, Margaret showed that bias can be injected in many different places in the AI pipeline. We don't have the concept of a model card, but a very similar one called the AI FactSheet, and in fact we work together with Margaret and others, also within the Partnership on AI, to compare and learn from each other these different ways to achieve transparency.

[02:58] The second point I want to make is that of course we want these developers to work according to AI ethics guidelines and principles, but principles are just the first step in a corporate setting where AI is being produced and deployed. It really needs a lot of multi-stakeholder consultation, education, training and, as Margaret already mentioned, diverse teams to bring in many different backgrounds. It needs a lot of technical tools, for example to detect and mitigate bias, to generate explanations, and so on. It needs a lot of work in helping developers understand how to change the way they're doing things, how to make it as easy as possible to adopt a new methodology, and how to build an overall governance in a company that is a kind of umbrella over what the developers are doing, what the business units are doing, and so on. So it's really a process, and that's why I put up this picture of trust with all the cranes: because it's a process to build trust in AI.

[04:04] The last point I want to make is that, for all these properties that we want in AI systems in order to be able to trust them, unfortunately current AI is not there yet.

[04:16] These are the reasons why Francesca Rossi and her team at IBM published this paper, proposing a research direction to advance AI by drawing inspiration from cognitive theories of human decision making. The premise is that if we gain insights into human capabilities that are still lacking in AI, such as adaptability, robustness, abstraction, generalizability, common sense and causal reasoning, we may be able to obtain similar capabilities in an AI system. Nobody knows yet what the future of AI will be. Will it be neural networks, or do we need to integrate machine learning with symbolic and logic-based AI techniques? The latter is similar to neurosymbolic learning systems, which integrate two fundamental phenomena of intelligent behavior: reasoning, and the ability to learn from experience. They argue that a better comprehension of how humans have evolved to obtain these advanced capabilities can inspire innovative ways to imbue AI systems with these competencies. But nobody is better placed than Luis Lamb himself to explain the learning system shown in his recent paper, Neurosymbolic AI: The 3rd Wave.

[05:34] What we want to do here is exactly this convergence, because one of the key questions is to identify the building blocks of AI and how to make AI more trustworthy: explainable, but not only explainable, interpretable as well. So in order to make AI interpretable and sound, and to use the right computational models so that one can explain what's going on in AI, we need better representations, and we need models that are sound. Soundness, and the results that come from logic, the correctness results and all of that, can of course benefit from the great results we are having in deep learning. So our work corroborates the point that Gary Marcus made, and also the point that Daniel Kahneman made at AAAI: that System 1, the fast system that is associated with concepts like deep learning, certainly knows language, as Daniel Kahneman said, and that System 2, which is more reflective, certainly does involve a certain manipulation of symbols. This analogy of System 1 and System 2 leads us to build on the ideas that are the inspiration Gary brought in his book The Algebraic Mind, and that we formalized in several neural-symbolic systems since the early 2000s. Several of them, for temporal reasoning, modal reasoning and reasoning about knowledge, are formalized in this book, and of course we have been evolving this concept so that one can deal with combinatorial explosion and several other symbolic problems within a neurosymbolic framework.

[07:20] So the approach we have been defending over the years is that we need a foundational approach to neurosymbolic computing, to neurosymbolic AI, that is based both on logical formalization, where we have Francesca here and Judea Pearl, who have had outstanding results on symbolic AI, and on machine learning. We use logic and knowledge representation to represent the reasoning process, which is integrated with machine learning systems, so that we can also effectively perform neural learning using deep learning machinery. Our approach has been tested in training assessment simulators by TNO, the Dutch governmental research organization, and it has been applied in robotics, AI and several other applications. But what we offer here is a sound way, including some formal results, of saying that in order to have more effective and more trustful AI, our neurosymbolic systems need interpretable models that are based on sound logical models, and in this way we can explain what the neural learning process is doing. At the same time, we can prove that the results we obtain via machine learning can even be related to the formal results one typically expects from symbolic logic. For instance, in a system that we call Connectionist Modal Logic, which was, by the way, published in the same issue of Neural Computation in which Geoffrey Hinton published one of his influential papers on deep belief nets, we proved that modal and temporal logic programs can be computed soundly in neural network models. So what we provide, in a way, is a way of using neural networks as a learning system which can also learn to compute, in a deep way, the evolution of knowledge over time. This is what we explained in several of our papers, and also in recent work that we published, Gary, at AAAI 2018, and now at IJCAI 2020, where we present a survey paper.

[09:36] So the final message here is that there have been some developments, including the great AI debate between Bengio and Gary Marcus last year, and what we saw at AAAI 2020, showing that we need more convergence towards building more effective AI systems, and AI systems that most people can trust, since AI is becoming a lingua franca for science these days.

[10:04] Neurosymbolic AI is basically another type of AI system that tries to use well-founded knowledge representation and reasoning. It integrates neural-network-based learning with symbolic knowledge representation and logical reasoning, with the goal of creating an AI system that is both interpretable and trustworthy. This is where Francesca Rossi's work comes into play, with the paper called Thinking Fast and Slow in AI. As the name suggests, it focuses on Daniel Kahneman's theory of the two systems explained in his book Thinking, Fast and Slow, in an attempt to connect them into a unified theory, with the aim of identifying some of the roots of the desired human capabilities. Here is a quick extract from the AI Debate 2, organized by Montreal.AI, where Daniel Kahneman himself clearly explained these two systems and their link with artificial intelligence.

[11:01] I seem to be identified with the idea of two systems, System 1 and System 2, although they're not my idea; but I did write a book that described them. As quite a few of you surely know, we talk about the contrast between one system that works fast and another that works slow. But the main difference between System 1 and System 2, as I described them, was that System 1 is something that happens to you. You are not active: the words come to you, the ideas come to you, the emotions come to you. They happen to you; you do not do them. The essential distinction I was drawing between the two systems was really between something that happens to you and something that you do. High-level skills, in my description of things, were absolutely in System 1: anything that we can do automatically, anything that happens associatively, is in System 1.

[12:06] Another distinction between System 1 and System 2, as psychologists see them, is that the operations of System 1 tend to be parallel, while the operations of System 2 tend to be serial. So it's true that any activity we would describe as non-symbolic does, I think, belong to System 1. But System 1, I think, cannot be described as a non-symbolic system: for one thing, it's much too complicated and rich for that. It knows language, for one thing; intuitive thoughts are in language. The most interesting component of System 1, the basic component as I conceive of that notion, is that it holds a representation of the world, a representation that actually allows something that resembles a simulation of the world. As I describe it, we live with that representation of the world, and most of the time we are in what I call the valley of the normal. There are events that we positively expect, and there are events that surprise us, but most of what happens to us neither surprises us nor is expected. What I'm going to say next will not surprise you, but you didn't actually expect it. So there is that model that compares, that accepts many, many events as normal continuations of what happens, but rejects some, and it distinguishes what is surprising from what is normal. That's very difficult to describe in terms of symbolic or non-symbolic. Certainly, a lot of counterfactual thinking is in fact System 1 thinking, because surprise is something that happens automatically: you're surprised when something that happens is not normal, is not expected. And that forces common sense and causality to be in System 1, and not in System 2.

[14:22] In short, Kahneman explains that human decisions are guided by two main kinds of capabilities, or systems, which he refers to as System 1 and System 2. The first provides us with tools for intuitive, fast and unconscious decisions, which can be viewed as thinking fast, while the second system handles complex situations where we need rational thinking and logic to make a decision, here viewed as thinking slow.

[14:53] If we come back to the Thinking Fast and Slow in AI paper, Francesca Rossi and her team argue that we can make a very loose comparison between these two systems, 1 and 2, and the two main lines of work in AI, which are machine learning and symbolic logic reasoning, or rather data-driven versus knowledge-driven AI systems. The comparison between Kahneman's System 1 and machine learning is that both seem able to build models from sensory data, such as seeing and reading, and both System 1 and machine learning produce possibly imprecise and biased results. Indeed, what we call deep learning is actually not deep enough to be explainable, similarly to System 1. However, the main difference is that current machine learning algorithms lack the basic notions of causality and common-sense reasoning that our System 1 has. We can also see a comparison between System 2 and AI techniques based on logic, search, optimization and planning: techniques that do not use deep learning, but rather employ explicit knowledge, symbols and high-level concepts to make decisions. This is the similarity highlighted between the human decision-making system and current artificial intelligence systems. I want to remind you that, as the authors state, the goal of the paper is mainly to stimulate the AI research community to define, try and evaluate new methodologies, frameworks and evaluation metrics, in the spirit of achieving a better understanding of both human and machine intelligence. They intend to do that by asking the AI community to study 10 important questions and try to find appropriate answers, or at least to think about these questions. Here I will only quickly list these 10 questions to be considered in future research, but feel free to read the paper for much more information, and to discuss them in the comments below. So here it goes:

[16:58] 1. Should we clearly identify the AI System 1 and System 2 capabilities?
2. Is the sequentiality of System 2 a bug or a feature? Should we carry it over to machines, or should we exploit parallel threads performing System 2 reasoning? Will this, together with the greater computing power of machines compared to humans, compensate for the lack of other capabilities in AI?
3. What are the metrics to evaluate the quality of a hybrid System 1 / System 2 AI system? Should these metrics be different for different tasks and combination approaches?
4. How do we define AI's introspection, in terms of i-consciousness and m-consciousness?
5. How do we model the governance of System 1 and System 2 in an AI? When do we switch between them or combine them, and which factors trigger the switch?
6. How can we leverage a model based on System 1 and System 2 in AI to understand and reason in complex environments where we have competing priorities?
7. Which capabilities are needed to perform various forms of moral judging and decision making? How do we model and deploy possibly conflicting normative ethical theories in AI? Are the various ethical theories tied to either System 1 or System 2?
8. How do we know what to forget from the input data during the abstraction step? Should we keep knowledge at various levels of abstraction, or just raw data and fully explicit high-level knowledge?
9. In a multi-agent view of several AI systems communicating and learning from each other, how do we exploit and adapt current results on epistemic reasoning and planning to build and learn models of the world and of others?
10. And finally, what architectural choices best support the above vision of the future of AI?

[18:58] Feel free to discuss any of these questions in the comments below; I would love to hear your take on them and to debate over them. I definitely invite you to read the Thinking Fast and Slow in AI paper, as well as Daniel Kahneman's book Thinking, Fast and Slow, if you'd like more information about this theory. If this subject interests you, I would also strongly recommend following Yoshua Bengio's research on the consciousness prior. A huge thanks to Montreal.AI for organizing this AI Debate 2 and providing a lot of valuable information for the AI community. All the documents discussed in this video are linked in the description below. Please leave a like if you made it this far in the video, and since over 90% of you watching are not subscribed yet, consider subscribing to the channel so you don't miss any further news, clearly explained. Thank you for watching!
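Lamb's claim that logic programs can be computed soundly by neural networks can be illustrated with a toy sketch in the spirit of connectionist systems such as CILP: each rule becomes a threshold neuron, and iterating the network to a stable state computes the program's least model. The atoms, rules, and encoding below are hypothetical examples for illustration, not the actual Connectionist Modal Logic construction from the paper.

```python
# Toy sketch: a propositional logic program encoded as a threshold network,
# iterated to a fixpoint (hypothetical example, NOT the CML construction).

# Hypothetical program:
#   wet      <- rain
#   wet      <- sprinkler
#   slippery <- wet
ATOMS = ["rain", "sprinkler", "wet", "slippery"]
RULES = [("wet", ["rain"]), ("wet", ["sprinkler"]), ("slippery", ["wet"])]

def step(state):
    """One pass of the network: each rule is a neuron with weight 1 per
    body atom and threshold equal to the body size, so the head fires
    exactly when every body atom is active."""
    new = dict(state)
    for head, body in RULES:
        activation = sum(state[a] for a in body)  # weighted sum, weights = 1
        if activation >= len(body):               # threshold neuron fires
            new[head] = 1
    return new

def fixpoint(facts):
    """Iterate until the activations stabilize; the stable state is the
    least model of the program given the input facts."""
    state = {a: 1 if a in facts else 0 for a in ATOMS}
    while (nxt := step(state)) != state:
        state = nxt
    return {a for a, v in state.items() if v}

print(fixpoint({"rain"}))  # derives wet, then slippery, from rain
```

The soundness results Lamb mentions are of this flavor: the network's stable activations provably coincide with the logical consequences of the encoded program.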
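The governance question above, when to switch between System 1 and System 2, is often prototyped as a metacognitive arbiter: trust a fast, cached answer when its confidence clears a threshold, and fall back to slow, explicit search otherwise. The sketch below is a minimal illustration with made-up components (the `FAST_MEMORY` cache, the road graph, the 0.8 threshold), not the architecture proposed in the paper.

```python
# Toy System 1 / System 2 arbiter (illustrative only; all data is made up).

# "System 1": cached answers with a confidence score (fast, fallible).
FAST_MEMORY = {
    ("route", "home->work"): ("take highway", 0.95),
    ("route", "home->airport"): ("take highway", 0.40),  # low confidence
}

def system1(query):
    """Fast associative lookup; returns (answer, confidence)."""
    return FAST_MEMORY.get(query, (None, 0.0))

def system2(query):
    """Slow deliberate computation: exhaustive search over a tiny
    hypothetical road graph for the fastest option."""
    roads = {"home->airport": [("take highway", 55), ("take ring road", 35)],
             "home->work": [("take highway", 20)]}
    options = roads.get(query[1], [])
    return min(options, key=lambda o: o[1])[0] if options else None

def decide(query, threshold=0.8):
    """Metacognitive governor: use System 1 only above the threshold,
    otherwise pay the cost of System 2 reasoning."""
    answer, confidence = system1(query)
    if confidence >= threshold:
        return answer, "system1"
    return system2(query), "system2"

print(decide(("route", "home->work")))     # confident cache hit -> System 1
print(decide(("route", "home->airport")))  # low confidence -> System 2 search
```

Note that here the switch is triggered by a single scalar confidence; the paper's question is precisely which richer factors (risk, time budget, novelty) should trigger it instead.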