VR games are in an adolescent stage of development. Gimmicky visual effects are common, and features exist for novelty's sake. Because of the lack of standardization, developers add eye-catching VFX to direct the user's attention to features they want to tutorialize. If any of this sounds familiar, it might be because web development went through a similar pattern. Before standardization, visual effects such as animated hit counters and blinking 'new' GIFs were used to draw users' attention. Eventually, standardization in layout and controls led to a cleaner experience. VR development has yet to benefit from widespread standardization: movement controls, selection mechanics, and guiding attention are tackled individually by each developer, adding to the burden of tutorializing not only the game but also the platform. This talk will explore what we can predict for the course of VR development based on similar patterns of standardization from other platforms.
Theresa Duringer is CEO of Temple Gates Games, a San Francisco-based game studio creating award-winning games for Gear, Rift, and Vive. Her team is known for Bazaar, Ascension VR, and Race for the Galaxy, as well as for integrating machine learning AI that challenges even the most experienced players. With a focus on delivering seamless UX with high performance, she uses the studio's custom C++ engine to create flexible solutions to the challenges of delivering games across platforms. Prior to founding Temple Gates Games, she co-created Cannon Brawl and contributed to The Sims, SimCity, and Spore at Maxis.
Prof. David Silver
Deep Reinforcement Learning from AlphaGo to AlphaStar
Recently, self-learning systems have achieved remarkable success in several challenging problems for artificial intelligence, by combining reinforcement learning with deep neural networks. In this talk I describe the ideas and algorithms that led to AlphaGo: the first program to defeat a human champion in the game of Go; AlphaZero: which learned, from scratch, to also defeat the world computer champions in chess and shogi; and AlphaStar: the first program to defeat a human champion in the real-time strategy game of StarCraft.
David Silver leads the reinforcement learning research group at Google DeepMind. David graduated from Cambridge University in 1997 with the Addison-Wesley award. Subsequently, David co-founded the video games company Elixir Studios, where he was CTO and lead programmer, receiving several awards for technology and innovation. David returned to academia in 2004 to study for a PhD on reinforcement learning with Rich Sutton, where he co-introduced the algorithms used in the first master-level 9x9 Go programs. David was awarded a Royal Society University Research Fellowship in 2011, and subsequently became a professor at University College London. David consulted for DeepMind from its inception, joining full-time in 2013, where he leads the reinforcement learning team. David co-led the Atari project, in which a program learned to play 50 different games directly from pixels (Nature 2015). He is best known for leading the AlphaGo project, culminating in the first program to defeat a top professional player in the full-size game of Go (Nature 2016), as well as the AlphaZero project (Nature 2017), in which a program learned by itself to defeat the world's strongest chess, shogi and Go programs (Science 2018). These achievements have been recognised by awards such as the Marvin Minsky Medal, Royal Academy of Engineering Silver Medal, Mensa Foundation Prize, Cannes Lion Grand Prix and several best paper awards.
Dr. Katja Hofmann, Microsoft Research Cambridge
Minecraft as AI Playground and Laboratory
This talk focuses on Project Malmo, an AI experimentation platform that my team built on top of the popular video game Minecraft. I will show how the open-ended nature of Minecraft, which is so appealing to its human fan base, also makes the game uniquely challenging for current AI agents. I will highlight some of the opportunities this creates for driving AI research towards faster learning, complex decision making, and, ultimately, collaboration with human players. Looking to the future, I will discuss directions for tackling these challenges, from learning with human priors to multi-task learning.
Dr. Katja Hofmann is a Principal Research Manager at the Game Intelligence group at Microsoft Research Cambridge, UK. There she leads a research team that focuses on reinforcement learning with applications in modern video games. She and her team strongly believe that modern video games will drive a transformation of how we interact with AI technology. One of the projects developed by her team is Project Malmo, which uses the popular game Minecraft as an experimentation platform for developing intelligent technology. Katja's long-term goal is to develop AI systems that learn to collaborate with people, to empower their users and help solve complex real-world problems.
Before joining Microsoft Research, Katja completed her PhD in Computer Science as part of the ILPS group at the University of Amsterdam. She worked with Maarten de Rijke and Shimon Whiteson on interactive machine learning algorithms for search engines.
Jon Paul Schelter
Team Lead Programmer, Ubisoft Toronto
Starlink: The Opportunity Machine
Open-world games typically require a large team of content authors to fill expansive worlds with varied experiences for the player. Linear narratives are often presented in open-world games as story beats overlaid on the game world.
In Starlink: Battle for Atlas, the team at Ubisoft Toronto used offline procedural content generation to create seven planets and populate them with a variety of non-player characters and world elements. The actions of over 100,000 units were continuously simulated throughout the game, and interesting situations were detected and presented dynamically to the player as missions. In addition, linear narrative sequences were seamlessly integrated with the simulation, to be explored at will by the player.
In this talk, Ubisoft Toronto team lead programmer Jon Paul Schelter will discuss how simulation acts as an opportunity machine, and how the team developed the AI and simulation systems that present a challenging, varied, and fun experience to the player.
Jon Paul Schelter is a Team Lead Programmer at Ubisoft Toronto. As the AI Team Lead on the action-adventure game Starlink: Battle for Atlas, he led the team responsible for populating and simulating the procedurally generated worlds, and he focuses on compelling NPC behaviour at the intersection of systemic and more traditional, crafted narrative approaches to games.
Jon Paul brings 23 years of game development experience to Ubisoft Toronto, where he previously worked on Splinter Cell: Blacklist and Assassin's Creed Unity, and is currently working on an unannounced project. A graduate of Queen's University with a degree in Physics and Computer Science, Jon Paul first began programming games on the TI-99/4A, and his first published titles include The Crow: City of Angels and The Perfect Weapon for the Saturn and PS1. He joined Matrox Graphics as a member of the driver and GPU architecture teams, representing Matrox at the OpenGL ARB, where he contributed to several versions of the standard as well as to the vertex and fragment programming extensions. Prior to joining Ubisoft Toronto, Jon Paul was a programmer at Rockstar Games, where he was responsible for the animation, 3Cs, and combat systems on the highly acclaimed PS2/Xbox game The Warriors, and was later a founding member and Technical Director at Bedlam Games in Toronto, focused on AAA and online competitive games.
Emily Short, Chief Product Officer, Spirit AI
Practical Game Design Around Natural Language Understanding
Spirit AI’s Character Engine is designed to support games that understand natural language input and generate natural language output. Drawing on several case studies from Spirit's work with Character Engine and collected data about user engagement, on her own creative practice, and on emerging design wisdom from designers of conversational games for Alexa and smart speakers, Emily addresses how currently practical AI techniques can be deployed to center gameplay on language input and language output.
She shares the results of projects that combine intent recognition and question classification, sentiment analysis, and custom metrics of user confusion and excitement, to produce specific gameplay effects. She also discusses how we can deploy UI feedback, controlled hinting, and game design techniques to improve the player's experience of AI.
Emily Short is Chief Product Officer and an executive director at Spirit AI, where she works with natural language processing and generation, character moods and social behavior, and conversation modelling. She was previously the creative director of the Versu project, building interactive iPad stories around AI character agents. She has an extensive background in games writing and narrative design as well as tooling for writing interactive fiction. Her blog can be found at http://emshort.blog.
Dr Richard A. Bartle, Honorary Professor, University of Essex
How to Be a God
Humans have been creating realities, in the form of virtual worlds, for 40 years. Because we control their physics, we are the gods of these worlds. Because of the present state of AI, the non-player characters in these worlds are neither self-aware nor sapient. However, if we look ahead 400 years, or 40,000 years – take as many years as you like – we will be able to make free-thinking inhabitants of our worlds.
In this talk, I look at some of the moral and ethical questions that this ability will raise. I provide no answers, just implications; it is for individual audience members to make up their own minds as to what is the right thing to do. Is it OK to switch such realities off? Would we implement death and suffering? Should we be pleased or upset if our creations created their own sub-realities?
I end by inviting the audience to consider the possibility that our own reality is itself a sub-reality of a higher reality. Is your own analysis of what is 'right' when creating realities in line with how any designer of our own reality appears to have acted?
Dr Richard A. Bartle is Honorary Professor of Computer Game Design at the University of Essex, UK. He is best known for having co-written in 1978 the first virtual world, MUD, and for his Player Types model, which has seen widespread adoption by the MMO industry. His 2003 book, Designing Virtual Worlds, is the standard text on the subject, and he is an influential writer on all aspects of online design and development. In 2010, he was the first recipient of the prestigious Online Game Legend award at the Game Developers Choice Online Awards.
James Dean, Managing Director, ESL UK
James Dean is the founder and Managing Director of the ESL UK subsidiary, building the ESL brand and community at a regional level. James began his career in IT, writing for the UK's largest computer magazine while studying for a degree in Computer Science at Birmingham University. After graduating, James managed the PC division of a well-known UK computer retailer, and in 2004 launched a gaming PC brand, increasing average margins from 6% to 20%.
In 2006, James founded a sales, marketing and PR company specialising in services for the IT and gaming industry, representing brands without existing operations in the UK. Within three years, the agency was generating over £5m in annual revenue for various niche gaming brands, including SteelSeries, the well-known esports peripheral brand.
This work laid the foundation for the ESL UK franchise, established after an introduction to ESL Germany in 2012 and eventually purchased by ESL's global group in 2016. ESL is the world's largest esports company, with 15 offices spanning the globe. The UK office has maintained 50%+ year-on-year growth since its founding. It operates the UK's longest-running national esports tournament, the ESL Premiership, currently spanning four gaming titles. To date, ESL still holds the record for the largest-ever online audience for a UK esports tournament.