
The Auckland Face Simulator Defines Next-Gen Facial Animation

16/06/2015
Imagine a machine that can laugh and cry, learn and dream, and can express its inner responses to how it perceives you to feel…

The Laboratory for Animate Technologies, based at the Auckland Bioengineering Institute at the University of Auckland, New Zealand, is creating ‘live’ computational models of the face and brain by combining bioengineering, computational and theoretical neuroscience, artificial intelligence and interactive computer graphics research. (Golly, some big words there.)

Led by Academy Award-winning associate professor Mark Sagar, who previously worked as the special projects supervisor at Weta Digital, the team is developing multidisciplinary technologies to create interactive, autonomously animated systems that will define the next generation of human-computer interaction and facial animation.

Involved in the creation of technology for the digital characters in blockbusters such as Avatar, King Kong and Spider-Man 2, Sagar saw his pioneering work in computer-generated faces recognised with two Oscars at the 2010 and 2011 Sci-Tech Awards, a branch of the Academy Awards that honours movie science and technological achievements. But more on Sagar later.

What is the Auckland Face Simulator?

The Auckland Face Simulator is being developed to cost-effectively create highly realistic and precisely controllable models of the human face and its expressive dynamics for psychology research and advanced human-computer interaction (HCI). The faces can be driven at the level of individual muscle movements, and speech can be driven by real or computer-generated voices.
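
The lab hasn't published its engine, but a rough Python sketch can illustrate what muscle-level control of a face means in practice: a mesh deformed by a weighted sum of per-muscle displacement fields. Everything below (the names, the toy data, the linear deformation model) is a hypothetical stand-in, not the simulator's actual API.

```python
import numpy as np

# Hypothetical sketch of muscle-driven facial posing. The muscle
# names are real facial muscles, but the rig and its linearity are
# illustrative assumptions, not the Auckland lab's engine.
MUSCLES = ["frontalis", "corrugator", "orbicularis_oculi",
           "zygomaticus_major", "orbicularis_oris"]

class MuscleDrivenFace:
    def __init__(self, neutral_mesh, muscle_deltas):
        # neutral_mesh: (V, 3) vertex positions of the resting face
        # muscle_deltas: (M, V, 3) vertex displacements at full
        # activation of each muscle (a linear approximation)
        self.neutral = neutral_mesh
        self.deltas = muscle_deltas

    def pose(self, activations):
        # activations: dict mapping muscle name -> level in [0, 1]
        a = np.clip([activations.get(m, 0.0) for m in MUSCLES], 0.0, 1.0)
        # Deformed mesh = neutral + weighted sum of muscle displacements
        return self.neutral + np.tensordot(a, self.deltas, axes=1)

# Toy data: a 4-vertex "face" with random displacement fields
rng = np.random.default_rng(0)
face = MuscleDrivenFace(rng.normal(size=(4, 3)),
                        rng.normal(scale=0.1, size=(len(MUSCLES), 4, 3)))
smile = face.pose({"zygomaticus_major": 0.8, "orbicularis_oculi": 0.3})
print(smile.shape)  # (4, 3)
```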

The Auckland Face Simulator uses the same engine as BabyX – the lab’s virtual infant that can see, hear, read expressions, learn from experience and express its emotions.

On the big picture, Sagar says it's about bringing technology to life: how we interact with machines, and how machines express themselves to us, as the human-centric component of this has been largely neglected. So applications exist in virtual assistant technology, education, healthcare and even cognitive computing.

“It can also work with literally any face,” Sagar said.

“We specialise in highly realistic faces, but we can also animate Coke bottles or cardboard boxes to literally embody brand personality,” he explained.

“The face simulator can be used for any Human Computer Interface – or digital signage (large interactive billboards, shopping mall displays, kiosks etc – even stadium displays).”

The lab has a core team of seven people, along with a commercial spin-out for custom jobs such as Xyza, a digital airport check-in assistant. Sagar says each digital face of this quality takes about six weeks to complete from start to rendering, depending on the face, hair and so on.

A Little Background on BabyX

BabyX is about exploring theories and models of embodied cognition, and provides an embodiment (albeit virtual) that has the power of human expression.

This summer in New Zealand, Sagar will be revealing BabyX version 4 at MOTAT, a technology museum in Auckland. The public will interact with BabyX and can teach her words.

“We are working with Auckland University developmental psychologist Dr Annette Henderson to explore the details of early social learning, and one of the things we hope to achieve is to educate the public about how much the way in which they teach a child matters, as they can see that the way in which they interact with BabyX will affect her learning,” he said.

“We will also be able to use BabyX version 4 as a ‘Virtual Turing Test’ for models of early behaviour - to see how well the model can elicit natural responses from the caregiver. We will be increasing the sophistication of these models over time, and we can explore the effect each aspect of the behaviour has in isolation.”

How Can Adland Adopt This Technology?

Sagar says the technology can be used for digital doubles but he also thinks the really interesting future is in interactive digital humans and characters, whether it is a celebrity – or say Gollum!

“What if Gollum sells you a movie ticket? Or teaches you a lesson? Edutainment is another area I think could be a great and beneficial use of the technology. When my real daughter saw her digital version read a word that she couldn’t, it triggered something very deep. I think there is something very powerful here.”

He also thinks the technology has myriad uses in the world of adland.

“Advertising is about emotion – and expression, and think how many adverts involve faces! Once you add interaction, you now have a dynamic emotional connection between a person and (say) a billboard. A billboard can see you in a crowd and smile.”
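
As a toy illustration of that interaction loop (and only that; this is not the lab's technology), a few lines of Python with OpenCV show what "a billboard that sees you and smiles" minimally involves: watch a camera, detect faces, and ease an expression in and out. The set_smile() hook is a hypothetical stand-in for whatever engine drives the rendered face.

```python
import time
import cv2

def set_smile(level):
    # Hypothetical hook: in a real installation this would drive the
    # rendering engine's smile expression; here it just prints.
    print(f"smile level: {level:.2f}")

# OpenCV ships a pretrained frontal-face Haar cascade we can reuse
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)  # the billboard's camera

smile = 0.0
for _ in range(300):  # run ~300 frames for this sketch
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Ease the smile in when faces are present, out when they are not
    target = 1.0 if len(faces) > 0 else 0.0
    smile += 0.1 * (target - smile)
    set_smile(smile)
    time.sleep(0.03)

cam.release()
```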

Sagar says it can even be used in areas that traditionally use single image photography.

“Think of the effort which goes into creating visual advertising, for example – getting just the right look on a model’s face. With this technology this can be changed on the fly, even adapting its behaviour. It can bring digital signage to life, in subtle or highly expressive ways.”

How Do You Get So Smart So Young?

Sagar’s background is in both art and science (his mother was an artist and his father a technologist). He did a Ph.D. in Bioengineering, developing a system to create virtual models of anatomy. This led to creating models of the face (a complicated piece of anatomy) and then later to creating digital humans.

What was his first job?

“Well, my first actual job was as a Santa Claus for a shopping mall, but that’s another story! My first proper job was a post-doctorate at M.I.T. in Boston, and then in Los Angeles I was co-director of R&D at Pacific Title Mirage, then LifeFX Inc., and then later special projects supervisor at Sony Imageworks and Weta Digital.”

He says Weta Digital was a fantastic place to work because it was full of talented folk.

“My most enjoyable times on films like King Kong and Avatar involved (sometimes all in one day) the combination of working with the actors, directors, artists and developers, which gave insights into how artistic visions were articulated, then manifested on stage and then digitally transformed into a creature which appeared to have a soul.”

Whilst he was always interested in consciousness, Weta really got him thinking about how it could be possible to make a character animate itself.

“Of course the answer is it needs to have a nervous system, and a face and body to interact and express itself, and intrinsic motivation and learning. This was such an irresistible challenge that I had to set up a lab to pursue it.” (As you do.)

When asked what inspires him, Sagar says nature, science, technology and art but his most interesting answer is “the face and brain”.

“I see the face as the interface – how does our subjective internal world connect with objective reality?”

However, his overriding interest in this work is, as you may have guessed, philosophical.

“How do we tick? This work is about combining biologically motivated models of expression, emotion and learning with technology to try to make a large 'functioning sketch' of human behaviour as a process, so as to better understand our nature.”

He really wants to know if a model can be constructed in such a way as to effectively have free will.

“Will we ever be able to model something as elusive as consciousness? Is it possible to make a teachable machine? Is it possible to make a machine that can express itself, and have an imagination? A goal is to have BabyX dream about her experiences and be able to visualise her synthetic thoughts!”

We salute you, Sagar.
