AI & ML

3D Ken Burns Effect From Single Image

Researchers at Portland State University and Adobe have demonstrated generating a 3D Ken Burns (parallax) effect from a single image. The system uses neural networks to predict depth and object boundaries, and context-aware in-painting to fill in the missing regions of the video, simulating a moving point of focus. We can tick another Star Trek TNG sci-fi concept off the list.
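The core parallax idea can be illustrated in a few lines: shift each pixel in proportion to its inverse depth, so near objects move more than far ones as the virtual camera translates. This is only a minimal NumPy sketch, not the paper's pipeline; the actual system predicts the depth map with a neural network and fills the resulting holes with learned in-painting, whereas here the holes simply stay black.

```python
import numpy as np

def parallax_shift(image, depth, camera_dx):
    """Shift each pixel horizontally in inverse proportion to its depth.

    Nearer pixels (small depth) move more than distant ones, producing
    the parallax behind the 3D Ken Burns effect. The holes this leaves
    are what the paper fills with context-aware in-painting; in this
    sketch they remain zero.
    """
    h, w, _ = image.shape
    out = np.zeros_like(image)
    shift = (camera_dx / depth).astype(int)
    # Paint far pixels first so nearer ones overwrite them (a crude z-buffer).
    for idx in np.argsort(-depth, axis=None):
        y, x = divmod(idx, w)
        nx = x + shift[y, x]
        if 0 <= nx < w:
            out[y, nx] = image[y, x]
    return out

# Toy example: a 4x4 image whose left half is "near" and right half "far".
img = np.arange(4 * 4 * 3, dtype=np.uint8).reshape(4, 4, 3)
depth = np.where(np.arange(4) < 2, 1.0, 4.0)[None, :].repeat(4, axis=0)
frame = parallax_shift(img, depth, camera_dx=2.0)
```

With this depth map the near columns shift by two pixels while the far columns stay put, leaving a hole on the left edge where in-painting would take over.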

AI Voice Mimic Heist

AI-generated voice-mimicking software was used to persuade the director of the UK subsidiary of a German energy firm that their boss was on the phone, allowing thieves to order the director to transfer funds to their bank account! This is believed to be the first voice-AI-assisted theft, and it was so convincing that the director who made the transfer said the software even imitated the tonality of the boss's voice.

Trivago ML Image Labelling

Trivago deployed machine learning to present images of hotel spas when a user searches for hotels with spas, improving the user experience. This blog post describes how they fine-tuned pre-trained convolutional neural networks (CNNs) to label 100+ million hotel images so that spa-related images can be displayed during contextual searches.

Latent Knowledge Identification Using ML

Researchers used machine learning to analyse 3.3 million materials science abstracts published between 1922 and 2018. They found that the ML system captured fundamental knowledge within the field and, in tests on historical data, flagged new materials as worth studying before those materials were discovered in real life. The research shows how machine learning can be used to surface latent knowledge more quickly.
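The underlying mechanism is word embeddings learned from the abstracts: a candidate material can be ranked by the cosine similarity of its vector to an application term such as "thermoelectric", even if the two never co-occur. This minimal sketch uses hand-made toy vectors in place of embeddings actually trained on a corpus; the material names are illustrative only.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for embeddings learned from abstracts.
# In the study, similarity to application terms like "thermoelectric"
# surfaced promising materials before they were studied directly.
embeddings = {
    "thermoelectric": np.array([0.9, 0.1, 0.2]),
    "CuGaTe2":        np.array([0.8, 0.2, 0.1]),  # near the concept
    "NaCl":           np.array([0.1, 0.9, 0.3]),  # unrelated salt
}

target = embeddings["thermoelectric"]
ranked = sorted(
    (name for name in embeddings if name != "thermoelectric"),
    key=lambda name: cosine(embeddings[name], target),
    reverse=True,
)
# The candidate closest to "thermoelectric" ranks first.
```

Running the ranking over every material mentioned in the corpus, rather than two toy entries, is essentially how the historical predictions were produced.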

Lyft Self Driving Dataset

Lyft have released a huge Level 5 self-driving dataset comprising 55,000 human-labelled, 3D-annotated frames, a drivable surface map and an underlying spatial semantic map to contextualise the data. The release is part of a competition with a $25,000 prize, aimed at helping researchers train AI algorithms toward Lyft's goal of a Level 5 (fully automated) self-driving car.

Hololens Hologram Language Translation

Julia White from Microsoft Azure Marketing demonstrated using HoloLens 2 to project a motion-captured hologram of herself speaking Japanese in her own speech patterns. The demonstration was created using Azure mixed reality to record the hologram, Azure text-to-speech and translation services to create the spoken content, and Azure neural text-to-speech technology to reproduce her speech patterns in Japanese.
