… also how the decisions come about. We in the USA are not yet holding this debate to this extent, and that will become a problem for civil society. I think Europe currently needs to decide whether it is going to become a player in AI and on what terms it will play.

A major debate at present concerns the distortions of decisions taken by machine-learning systems, the so-called bias. This leads, for example, to racist discrimination. Where does this debate originate from?

There are many examples of discriminatory decisions made by algorithmic systems. Women in the USA were not shown highly paid jobs in a Google advertisement. U.S. software rated the risk of relapse of black prisoners higher in the criminal justice system. In healthcare, people suddenly no longer received their treatment because an algorithmic system decided that they no longer needed it, without a human being taking part in the decision. These issues have been present for some years already with simple AI systems. And when it comes to very advanced AI systems, we have problems with the traceability of decisions.

Are companies and science responding appropriately?

I see a worrying pattern emerging: the quick fix. The idea that we can simply bring a mathematical idea, a formula for fairness, into technical systems. That will fail, because our AI systems are trained using data from the past, a past in which bias and discrimination are deeply embedded. We need to think broadly, socio-technically, because this is a much larger debate that must not be held only in the computer science laboratories. We need a much more interdisciplinary approach. We need to integrate politicians as well as sociologists, political scientists, philosophers, and historians. The question right now is: how do we want to live, and how should our technical systems support this? This is the biggest challenge in the coming years.

Will there ever be a world without these biases, without discrimination?

In the past, people have produced social change by rising up against a system that they thought was unjust. You can do that when you can see a system and demand a different way of living. In complex AI systems, however, we are often not even aware what systems are at work, and even when we are, their decisions are usually hard to understand. This makes it extremely difficult for those affected to defend themselves. We must therefore insist that these systems are accountable and transparent. But we also know that there is no quick technical solution. We must accept that these systems will always produce forms of discrimination. We will have to decide in which areas this is acceptable.

Can we know if there is bias or not before implementing AI systems?

At the AI Now Institute, the first-ever university research institute dedicated to understanding the social implications of AI, we are researching how a system can be tested early and systematically, so that the extent to which it discriminates against different groups can be understood from the outset and over time. We have also developed a framework for Algorithmic Impact Assessment that …

How do we teach machines not to discriminate?

Machines are constantly learning based on data from the past, and they can use it to cement existing unequal circumstances. For example, when a system learns from the Internet which careers men and women perform, it could come to the conclusion that women shouldn't be presented with any job openings for doctors or computer scientists, and men shouldn't be presented with any jobs in nursing. Researchers are currently working hard to rid AI of these kinds of weaknesses. Nevertheless, because AI is so good at recognizing patterns in data, it's not enough to just delete factors like gender or skin color from the data: AI simply calculates them from other contexts, such as names or addresses, social environments, and much more.
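This proxy effect, and the kind of early, systematic group-wise testing mentioned above, are easy to reproduce. The following minimal Python sketch uses entirely synthetic data and parameter choices of my own for illustration; it is not a method described in the interview. A hiring model is trained with the gender column removed, yet a correlated proxy feature lets it reconstruct the historical bias, which a simple per-group audit of selection rates immediately reveals.

    # Sketch: dropping the protected attribute does not remove bias when a
    # proxy feature remains. Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    gender = rng.integers(0, 2, n)          # 0 = male, 1 = female (synthetic)
    # Proxy: e.g. a field-of-study score that correlates with gender in past data.
    proxy = rng.normal(loc=gender, scale=0.7, size=n)
    skill = rng.normal(size=n)              # genuinely job-relevant signal

    # Historical hiring decisions were biased: being female lowered the odds.
    logit = 1.5 * skill - 2.0 * gender
    hired = rng.random(n) < 1 / (1 + np.exp(-logit))

    # Train WITHOUT the gender column -- only skill and the innocuous-looking proxy.
    X = np.column_stack([skill, proxy])
    pred = LogisticRegression().fit(X, hired).predict(X)

    # Audit: selection rate per group, as an early systematic bias test would do.
    rate_m = pred[gender == 0].mean()
    rate_f = pred[gender == 1].mean()
    print(f"selection rate, men:    {rate_m:.2f}")
    print(f"selection rate, women:  {rate_f:.2f}")
    print(f"demographic parity gap: {rate_m - rate_f:.2f}")  # large despite dropped column

Running this shows a substantial gap in selection rates between the groups even though the model never saw the gender column, because the proxy carries the same information.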
Added to that are further weaknesses of machine learning, especially deep learning based on deep neural networks: the systems cannot explain how they came to a decision. In the event of complex decisions, this means that people simply have to trust them. But given the mishaps that have happened in the past, this isn't a good solution. In research, there are initial promising approaches involving an explainable AI that at least partially reveals the factors on which it is basing a decision, so people can check it for plausibility and verify whether the decision was made on the basis of our value system.
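As a deliberately simple illustration of what "revealing the factors behind a decision" can look like, the sketch below uses a transparent linear model, where each feature's contribution to a single decision is just its coefficient times its value. The feature names and data are hypothetical, my own assumption for illustration; the explainable-AI research mentioned above targets deep networks, where such attributions are much harder to obtain.

    # Sketch: per-feature attribution for one decision of a linear model.
    # Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5_000, 3))
    # Synthetic ground truth: feature 0 matters most, feature 2 not at all.
    y = (2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=5_000)) > 0

    model = LogisticRegression().fit(X, y)

    feature_names = ["income", "age", "postcode"]   # hypothetical labels
    applicant = X[0]
    # Contribution of each feature to the decision score (the logit).
    contributions = model.coef_[0] * applicant

    # Rank factors by influence so a human can check them for plausibility.
    for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        print(f"{name:>8}: {c:+.2f}")
    print("decision:", "approve" if model.predict([applicant])[0] else "reject")

The printout ranks the factors behind this one decision by influence, so a reviewer can spot at a glance if, say, a postcode rather than income is driving the outcome.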
