ARKit: How Apple’s iOS AR Works

Apple announced at yesterday’s Worldwide Developer Conference (WWDC) that the next iteration of iOS, iOS 11, will include Augmented Reality functionality with the launch of ARKit.

Since the reveal, we’ve found out a little more about the software. The new framework has been designed for ease of use, and in Apple’s words, ‘ARKit takes apps beyond the screen, freeing them to interact with the real world in entirely new ways.’

With the requirement to place digital objects in an actual environment, it’s essential to be able to map the scene in detail. To this end, ARKit uses Visual Inertial Odometry (VIO) to accurately track the world around it. VIO fuses camera sensor data with CoreMotion data and it’s these two inputs that allow the device to sense how it moves within a room with a high degree of accuracy, without any additional calibration.
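In practice, all of that tracking is handled for you once a session is running. A minimal sketch of kicking off a world-tracking session with an `ARSCNView` (view-controller and storyboard boilerplate assumed, API names as per the iOS 11 SDK):

```swift
import ARKit
import SceneKit
import UIKit

class ARViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking uses VIO under the hood: camera frames
        // fused with CoreMotion data — no extra calibration needed.
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause the session when the view goes away to save power.
        sceneView.session.pause()
    }
}
```

This snippet is illustrative only — it needs to run on a physical device with the required hardware, since the session depends on live camera and motion input.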


Plane Speaking

Both iPhone and iPad can use ARKit to interpret the scene presented by the camera view and find horizontal planes such as tables and floors. The system can also track and place objects on smaller feature points as well. ARKit also makes use of the camera sensor to estimate the total amount of light available in a scene and applies the correct amount of lighting to virtual objects.

ARKit runs on devices with Apple’s A9 and A10 processors, with rendering optimisations for Metal and SceneKit.
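Because of that hardware requirement, apps are expected to check for support before starting a session. A short sketch of the availability check (again assuming a `sceneView: ARSCNView`):

```swift
import ARKit

// World tracking needs an A9 chip or later; check before running.
if ARWorldTrackingConfiguration.isSupported {
    let configuration = ARWorldTrackingConfiguration()
    sceneView.session.run(configuration)
} else {
    // Fall back gracefully on older devices.
    print("ARKit world tracking is not supported on this device")
}
```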


To understand more about how to work with ARKit, Apple has posted documentation on its developer pages:

  • Understanding AR
  • Configurations
  • Building A Basic AR Experience
  • Displaying An AR Experience With Metal
  • Real-World Objects and Positions
  • Camera and Scene Details


Managing Editor

Steve is an award-winning editor and copywriter with nearly 25 years’ experience specialising in consumer technology and video games. He was part of a BAFTA-nominated developer studio. In addition to editing, Steve contributes to several publications, as well as creating marketing content for a range of SMEs and agencies.