Is neuroscience the key to protecting AI from adversarial attacks?


Deep neural networks are a key component of computer vision applications, from video editors to medical software and self-driving cars.

Related Articles

Deep learning models DON’T need to be black boxes — here’s how

Deep neural networks can perform wonderful feats thanks to their extremely large and complicated web of parameters. But their complexity is also their curse: The inner workings of neural networks are often a mystery — even to their creators. This is a challenge that has been troubling the artificial intelligence community since deep learning started to become popular in the early 2010s. In tandem with the expansion of deep learning in various domains and applications, there has been a growing interest in developing techniques that try to explain neural networks by examining their results and learned parameters. But these explanations are often erroneous and misleading, and… This story continues at The Next Web

AI generates trippy music video inspired by 50,000 album covers

A Spanish artist has created a trippy music video by training a deep-learning algorithm on thousands of album covers. Bruno López produced the video by using a combination of Spotify data, Python scripts, and Generative Adversarial Networks. He first created a script that downloaded the album covers for every track featured on Spotify’s official editorial playlists. The result was a dataset of around 50,000 covers with individual resolutions of 640 x 640. This was used as training data for Nvidia’s StyleGAN2 architecture, the same system that previously brought us pizzas, fursonas, and My Little Pony characters that don’t exist. After several days of training,… This story continues at The Next Web
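The dataset-building step described above — pulling every track from a set of playlists and keeping one copy of each album cover at the target resolution — can be sketched in Python. This is only an illustration, not López's actual script: the `collect_cover_urls` function and the sample data shapes are hypothetical, modeled on the Spotify Web API's playlist-items response, where each item carries a track with an album whose `images` list holds `{url, width, height}` dicts.

```python
def collect_cover_urls(playlist_items, size=640):
    """Collect unique album-cover URLs at the requested resolution.

    `playlist_items` mimics the shape of Spotify Web API playlist
    responses: each entry is a dict holding a "track", whose "album"
    has an "images" list of {"url", "width", "height"} dicts.
    Duplicate covers (e.g. several tracks from one album) are kept once.
    """
    seen = set()
    urls = []
    for item in playlist_items:
        album = item.get("track", {}).get("album", {})
        for image in album.get("images", []):
            # Keep only covers at the desired resolution, skipping repeats.
            if image.get("width") == size and image["url"] not in seen:
                seen.add(image["url"])
                urls.append(image["url"])
    return urls


# Hypothetical sample data in the Spotify response shape:
items = [
    {"track": {"album": {"images": [
        {"url": "cover-a.jpg", "width": 640, "height": 640}]}}},
    {"track": {"album": {"images": [  # same album again: deduplicated
        {"url": "cover-a.jpg", "width": 640, "height": 640}]}}},
    {"track": {"album": {"images": [  # wrong resolution: skipped
        {"url": "cover-b.jpg", "width": 300, "height": 300}]}}},
]
print(collect_cover_urls(items))  # → ['cover-a.jpg']
```

The downloaded images would then be packed into a training set for StyleGAN2, which expects square images of uniform resolution — one reason the 640×640 cover size is convenient.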

A closer look at the AI Incident Database of machine learning failures

The failures of artificially intelligent systems have become a recurring theme in technology news. Credit scoring algorithms that discriminate against women. Computer vision systems that misclassify dark-skinned people. Recommendation systems that promote violent content. Trending algorithms that amplify fake news. Most complex software systems fail at some point and need to be updated regularly. We have procedures and tools that help us find and fix these errors. But current AI systems, mostly dominated by machine learning algorithms, are different from traditional software. We are still exploring the implications of applying them to different domains, and protecting them against failure needs new… This story continues at The Next Web
