Everything You Need To Know From F8

More than 4,000 delegates gathered at McEnery Convention Center in San Jose, California this week for Facebook’s 10th annual developer conference, F8. Across the two days, 18 and 19 April, we were given insights into the company’s progress to date and its future roadmaps for VR, AR and AI.

A huge proportion of the keynotes was dedicated to AR and VR, covering everything from Camera Effects and social Spaces to machine learning and AI. We saw announcements and launches, often simultaneous, as well as first insights into mind-blowing research work.

There was a lot to take in, and we know that not everybody has the time to be across all the news from F8, so we've distilled the key points into this executive summary:


Mark Zuckerberg, CEO: Opening Address 

  • Photos and videos are phase one of Facebook’s immersive media plan
  • AR is phase two
  • AR received more attention than VR
  • “We’re using primitive tools because we’re still on the journey to building better ones.”
  • ‘Simultaneous Localisation And Mapping’ (SLAM) enables Facebook to create 3D scenes from 2D images
  • A new genre of AR games will come this year


Mike Schroepfer, CTO: AI 

  • AI will be essential in blending the real world with the virtual
  • Mask R-CNN, launched last month, can identify and mask moving objects in video so they can easily be removed
  • Facebook is working on making AI functional on smartphones
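The object-removal demo rests on per-pixel segmentation masks of the kind Mask R-CNN produces. A minimal NumPy sketch of the removal step itself (the mask here is hand-made and hypothetical; in practice a segmentation model would supply it):

```python
import numpy as np

def remove_masked_object(frame, background, mask):
    """Replace masked pixels in `frame` with pixels from a clean
    `background` plate.

    frame, background: (H, W, 3) uint8 images
    mask: (H, W) boolean array, True where the object sits
    """
    out = frame.copy()
    out[mask] = background[mask]
    return out

# Tiny worked example: a 4x4 "video frame" with a 2x2 white object.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1:3, 1:3] = 255                           # the "object"
background = np.full((4, 4, 3), 10, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

clean = remove_masked_object(frame, background, mask)
print(clean[1, 1])   # object pixels now match the background
```

Running this over every frame of a clip, with a fresh mask per frame, is the essence of making a moving object "disappear".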


Deb Liu, VP of Platform & Marketplace: Camera Effects 

  • Camera Effects consists of two tools
  • Frame Studio announced and launched at F8, enables users to add frames to photos
  • AR Studio announced and released to beta at F8, enables users to add effects and elements to videos
  • EA used AR Studio to create an AR information environment for Mass Effect: Andromeda
  • Hand tracking to come
  • Full body tracking to come


Rachael Franklin, Head of Social VR: Social VR 

  • Facebook Spaces is Facebook’s first social VR experience, announced and launched at F8
  • Spaces integrates with Messenger, enabling video calls between VR and flat screens
  • Avatars can be generated from your photos on Facebook


Mike Schroepfer, CTO: Roadmap 

  • Facebook’s 10-year roadmap is based around three main pillars: Connectivity, AI and VR/AR
  • Advances in AI evidenced by more natural photo auto-captions on Facebook
  • “We’re maybe the only company investing in VR across the spectrum.”
  • x24 and x6 360-degree video and photo cameras, built in partnership with FLIR, announced at F8
  • Pixel overlap and computer vision can create a per-pixel depth map with new perspectives
  • No proprietary back-end systems
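The "pixel overlap" point refers to the standard stereo relation: where two lenses see the same pixel, its horizontal shift (disparity) between the views yields depth as depth = focal × baseline / disparity. A toy NumPy sketch with hypothetical numbers (not the x24/x6 specifications):

```python
import numpy as np

# Assumed camera parameters, purely illustrative.
focal_px = 700.0     # focal length in pixels
baseline_m = 0.12    # distance between the two lenses in metres

# Per-pixel disparity (in pixels) for a tiny 2x2 patch of overlap.
disparity = np.array([[35.0, 14.0],
                      [ 7.0,  3.5]])

# Larger disparity means the point is closer to the camera.
depth_m = focal_px * baseline_m / disparity
print(depth_m)   # metres: 2.4 and 6.0 on the top row, 12.0 and 24.0 below
```

Doing this for every overlapping pixel gives the per-pixel depth map, which in turn lets software synthesise the new perspectives mentioned above.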


Joaquin Quiñonero Candela, Applied Machine Learning Director: AI 

  • “We are taking AI out of the data centre and into the phone”
  • The technology can identify which person each pixel in a video belongs to
  • Speech can be translated
  • “Our algorithms crunch over one billion frames of video every second”
  • AI has been integrated into the Facebook platform
  • New open-source deep learning framework, Caffe2, speeds up neural style transfer by 100 times on phones, making it possible to run locally and in real time
  • “This is one of the world’s largest AI deployments ever”


Michael Abrash, Chief Scientist of Oculus Research: AR 

  • AR and virtual computing “will be one of the great transformational technologies of the next 50 years”
  • In 20 years, we will wear AR glasses as much as we carry smartphones today
  • VR and AR will develop independently in the short-term
  • VR and AR will eventually become interchangeable
  • AR will not be restricted to vision and will encompass all the senses
  • “Every area requires technology that is beyond today’s state of the art”
  • “People with AR glasses will be the smartest, most productive, best-connected people around”


Regina Dugan, VP of Engineering and Head of Building 8: Thought Control Interfaces 

  • “What if you could type directly from your brain?”
  • “It’s just the kind of fluid human-computer interface needed for AR”
  • Surgical implants are now achieving typing speeds of 8wpm; the target is 100wpm
  • System monitors signals in the speech centre of the brain
  • Potential for language translation based on concepts, not words


Every single speaker reminded us that this technology is in its infancy and will take many years to mature.

Managing Editor

Steve is an award-winning editor and copywriter with nearly 25 years’ experience specialising in consumer technology and video games. He was part of a BAFTA-nominated developer studio. In addition to editing, Steve contributes to several publications and creates marketing content for a range of SMEs and agencies.