What is Spatial?

I am proud to introduce my new blog: All Things Spatial. This blog will cover the topics of spatial computing, computer vision, deep learning, and n-D linear algebra. I will share my thoughts on the latest research and products in these domains. I will also share my personal projects and experiments. I hope you will enjoy reading this blog as much as I will enjoy writing it.

I started my career in Physics and have always loved learning about the mechanics of the real world. Born in Cameroon, Central Africa, I discovered that computers, through the information age, gave me the tools to understand the world and to build digital services for millions of users. Now, I want to train computers to understand the world. I have a deep interest in building artificial systems that automatically learn from and operate in the physical world. This is why I co-founded Selerio in 2016, an SDK that exposes semantic anchors for mobile AR.

Spatial computing is the digitization of spatial relationships between machines, people, objects, and environments. Put simply, it is the domain of computers that understand and interact with the real world: our physical space. Examples of commercial applications are Augmented Reality (AR), self-driving cars, and humanoid robots. In AR, the artificial system interacts with the physical world through a mobile phone (e.g., Snapchat filters) or through AR glasses (e.g., HoloLens for US Army training). In robotics, the artificial system moves physical objects through space; e.g., Teslas running FSD, Boston Dynamics robot dogs. These use cases have become increasingly common in recent years. This is the new era of technology that I am passionate about: artificial systems that interact with the real world, just like humans. Writing this technical blog will help me explore these systems and how they impact humanity.

Tenets

The thoughts shared in this blog will be guided by the following tenets:

  • Productivity. Technology creates tools to serve humans. I will avoid conjecture and focus on solving real-world problems for humans.
  • Practicality. I will rarely cover early research. I would like to discuss topics that have a realistic path to production. This is a deeply technical blog, but not a research review blog.
  • Simplicity. I will use first-principles thinking. Large systems that work are often built from simple systems that work. I will avoid black boxes and focus on the core ideas.

Why Now?

Hardware

From lidar sensors to waveguide displays, there has been a lot of progress in hardware. Capabilities like teleportation, first introduced in Star Trek over half a century ago, seem possible today. Check out Project Starline from Google. The following are some of the key hardware components used in spatial computing.

  • Sensing technology. Cameras, lidar, radar, ultrasonic, infrared, etc.
  • Computing. CPU, GPU, FPGA, ASIC, etc.
  • Actuators. Motors, servos, etc.

Software

Traditional computer vision algorithms such as SLAM, marching cubes, and ICP have been deployed in the real world. Early versions of self-driving cars, such as Cruise's, were mostly built on Kalman filters. Even though these algorithms only worked under specific conditions, they solved real-world problems. The introduction of AI has enabled more generic, robust solutions.
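
To make this concrete, here is a minimal sketch of a Kalman filter tracking a single object moving along one axis. This is a toy illustration, not code from any production self-driving stack; the constant-velocity motion model and the noise covariances below are assumptions chosen for the example.

```python
import numpy as np

# Toy 1-D constant-velocity Kalman filter: state x = [position, velocity].
# All model parameters (dt, Q, R) are illustrative assumptions.
dt = 0.1                                  # time step in seconds
F = np.array([[1.0, dt],                  # state transition: constant velocity
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])                # we only measure position
Q = np.diag([1e-3, 1e-2])                 # process noise covariance (assumed)
R = np.array([[0.25]])                    # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]])              # initial state estimate
P = np.eye(2)                             # initial state covariance

def kalman_step(x, P, z):
    """One predict/update cycle for a single noisy position measurement z."""
    # Predict: propagate state and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement.
    y = z - H @ x_pred                    # innovation (measurement residual)
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Feed in noisy position measurements of an object moving at ~1 m/s.
rng = np.random.default_rng(0)
for k in range(50):
    true_pos = 1.0 * k * dt
    z = np.array([[true_pos + rng.normal(0, 0.5)]])
    x, P = kalman_step(x, P, z)

print(f"estimated position: {x[0, 0]:.2f} m, velocity: {x[1, 0]:.2f} m/s")
```

The filter is simple, but it only works when the linear motion and noise assumptions roughly hold; that fragility is exactly what learned, more generic approaches aim to overcome.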

AI

We are clearly in the "AI everywhere" period of human history.

AI Timeline

The information age has been a period of data explosion. We have been able to capture and store more information than ever before. The next period is the period of AI explosion. With effectively infinite data, we can build and use a new breed of digital tools. Computers are increasingly able to understand the world around us and interact with it. This will be the period of spatial AI.

I am excited to explore all the topics just introduced and much more.