- Google is training its AI using what it terms atomic visual actions or AVAs
- These are three-second clips of people performing everyday actions
- Google believes it could lead to machines that can predict human behaviour
- It could also help advertisers tailor their campaigns to actions people respond to
Google is training its AI using what it terms atomic visual actions (AVAs).
These are three-second clips of people performing everyday actions, from walking and standing up to kicking and drinking from a bottle.
Google says it sourced the content from a variety of genres and countries of origin, including clips from mainstream films and TV, to ensure a wide range of human behaviours appear in the data.
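For anyone curious what the dataset actually looks like under the hood: the AVA annotations Google released are plain CSV rows, one labelled action per person per keyframe. Here's a minimal Python sketch of reading them. The column layout (video id, keyframe timestamp, normalised person box, action id, person id) follows the published format as I understand it, but treat the exact field order as an assumption and check the dataset docs before relying on it.

```python
# Minimal sketch: parsing AVA-style annotation rows.
# Assumed column order (verify against the official docs):
#   video_id, timestamp, x1, y1, x2, y2, action_id, person_id
import csv
from collections import namedtuple

Annotation = namedtuple(
    "Annotation", "video_id timestamp x1 y1 x2 y2 action_id person_id"
)

def load_ava_annotations(path):
    """Read AVA-style rows: one labelled action per person per keyframe."""
    annotations = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            video_id, ts, x1, y1, x2, y2, action_id, person_id = row
            annotations.append(Annotation(
                video_id,
                float(ts),              # keyframe time within the clip, seconds
                float(x1), float(y1),   # top-left of person box, 0-1 normalised
                float(x2), float(y2),   # bottom-right of person box, 0-1 normalised
                int(action_id),         # index into the action vocabulary
                int(person_id),         # links the same person across keyframes
            ))
    return annotations
```

The person id is the interesting bit: because the same person is tracked across keyframes, a model can learn action sequences (stand up, then walk, then drink) rather than isolated snapshots.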
Writing in a blog post, Google software engineers Chunhui Gu and David Ross said: 'Teaching machines to understand human actions in videos is a fundamental research problem in Computer Vision.
'Despite exciting breakthroughs made over the past years in classifying and finding objects in images, recognising human actions still remains a big challenge.
'This is due to the fact that actions are, by nature, less well-defined than objects in videos.
'We hope that the release of AVA will help improve the development of human action recognition systems.'
Full article and source: http://www.dailymail.co.uk/sciencetech/article-5012271/Google-AI-binge-watching-YouTube-learn-humans.html
________________________
Pretty cool, right? ^^