A closer look at the AI Incident Database of machine learning failures

The failures of artificial intelligence systems have become a recurring theme in technology news. Credit scoring algorithms that discriminate against women. Computer vision systems that misclassify dark-skinned people. Recommendation systems that promote violent content. Trending algorithms that amplify fake news. Most complex software systems fail at some point and need to be updated regularly. We have procedures and tools that help us find and fix these errors. But current AI systems, mostly dominated by machine learning algorithms, are different from traditional software. We are still exploring the implications of applying them to different applications, and protecting them against failure needs new…

This story continues at The Next Web

Related Articles

Your company’s AI strategy is failing — here are 3 reasons why

Most companies are struggling to develop working artificial intelligence strategies, according to a new survey by cloud services provider Rackspace Technology. The survey, which covers 1,870 organizations across industries including manufacturing, finance, retail, government, and healthcare, shows that only 20% of companies have mature AI/machine learning initiatives. The rest are still trying to figure out how to make it work. There's no questioning the promise of machine learning in nearly every sector. Lower costs, improved precision, better customer experience, and new features are some of the benefits of applying machine learning models to real-world applications. But machine learning is…

This story continues at The Next Web

What is semi-supervised machine learning?

Machine learning has proven to be very efficient at classifying images and other unstructured data, a task that is very difficult to handle with classic rule-based software. But before machine learning models can perform classification tasks, they need to be trained on a lot of annotated examples. Data annotation is a slow and manual process that requires humans to review training examples one by one and give them the right labels. In fact, data annotation is such a vital part of machine learning that the growing popularity of the technology has given rise to a huge market for labeled data. From Amazon's Mechanical Turk…

This story continues at The Next Web
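
As a rough illustration of the idea in the headline above, the sketch below trains a classifier when only a small fraction of the training examples carry labels, letting scikit-learn's SelfTrainingClassifier pseudo-label the rest. The digits dataset and the 10% labeling ratio are assumptions chosen for the example, not details from the article.

```python
# A minimal semi-supervised sketch: only ~10% of the training examples
# keep their labels; the rest are marked -1 (scikit-learn's convention
# for "unlabeled") and receive pseudo-labels during self-training.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pretend we could only afford to annotate about 10% of the training set.
rng = np.random.RandomState(0)
y_partial = y_train.copy()
unlabeled_mask = rng.rand(len(y_train)) > 0.10
y_partial[unlabeled_mask] = -1  # -1 marks an unlabeled example

# Self-training wraps a base classifier and only trusts confident pseudo-labels.
base = LogisticRegression(max_iter=1000)
model = SelfTrainingClassifier(base, threshold=0.9)
model.fit(X_train, y_partial)

print("accuracy with ~10% labels:", model.score(X_test, y_test))
```

The point of the sketch is the labeling convention: the unlabeled majority is fed to the model anyway, which is what makes the approach attractive when annotation budgets are tight.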

Google unveils privacy-friendly AI cookie killer

Third-party cookies might have a tasty name, but they can be pretty poisonous when they're quietly tracking your online behavior. A new Google machine learning initiative aims to replace the rancid cookies with a privacy-first alternative. The search giant calls the system Federated Learning of Cohorts (FLoC). FLoC (pronounced "flock") allows businesses to send ads to groups of potential customers rather than specific individuals. The system uses machine learning algorithms to create clusters of people with similar browsing habits. All the data analyzed by the algorithms, including your web history, is kept private on the browser and not uploaded anywhere else…

This story continues at The Next Web
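
To give a feel for the cohort idea, the sketch below groups users by clustering simple bag-of-domains vectors built from their browsing histories. This is only an illustration: the histories are made up, and Chrome's actual FLoC computes cohort IDs locally in the browser with its own algorithm, not server-side k-means.

```python
# Illustrative only: group users into "cohorts" by similarity of their
# browsing habits, so ads can target a cluster rather than a person.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical browsing histories: one string of visited domains per user.
histories = [
    "news.example sports.example sports.example",
    "sports.example scores.example news.example",
    "recipes.example cooking.example news.example",
    "cooking.example recipes.example recipes.example",
]

# Bag-of-domains representation of each history.
vectorizer = CountVectorizer(token_pattern=r"[^\s]+")
X = vectorizer.fit_transform(histories)

# Cluster users with similar habits into a small number of cohorts.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cohort id per user:", kmeans.labels_)
```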
