Catch us at the Intel Developer Forum this week in Shenzhen

04-13-2016

This has been an exciting year for Segway Robot, which we publicly showcased at the Consumer Electronics Show in Las Vegas a few months ago. In January, the Segway Robot group (supported through a joint engineering and strategic alliance with Intel Corporation and Xiaomi Technology) announced a round of funding from prestigious investors including GIC. Since then, we have been working tirelessly to bring our vision to life, but we cannot do it alone. Segway Robot is designed as an open platform, where hardware and software developers can join together to realize the full potential of the project.

This is where all of you come in. We want to hear proposals from developers and organizations who are as excited as we are about how Segway Robot can help improve people’s day-to-day lives. We want to work with you to extend the usefulness and functionality of the robot so that our vision of a truly transformable companion can be realized by the time it launches publicly.

To that end, here are some updates from our side. We hope to keep sharing news, ideas, and sneak peeks through this blog over time.


We’ll be heading to the Intel Developer Forum (IDF) in Shenzhen this week.

As you may already know, the Segway Robot is a self-balancing vehicle that can transform between a scooter and an autonomous robot, making it an extension of the human body that can hear, see, and perceive the world.

We’re happy to share the news that our Vice President, Pu Li, will deliver a speech during the RealSense™ Tech Session of the Intel® Developer Forum on April 13th, 2016, where he will join Intel’s VP and GM of Perceptual Computing, Achin Bhowmik.

The Intel Developer Forum is a gathering of technologists, developers, journalists, scientists, and other tech enthusiasts. Attendees converge to discuss Intel products and products built on Intel technologies, such as the Segway Robot, which is powered by Intel’s RealSense™. RealSense™ technology is changing how humans interact with computing devices by giving those devices human-like “senses” that expand their perception of the world. Through RealSense™, the Segway Robot can capture depth images to reconstruct obstacles in its surroundings and, in combination with its other sensors, track its own movement. Beyond vision and obstacle avoidance, the robot’s incredible skill set also includes mobility and speech. In robot mode, it can navigate complex environments, follow a person, recognize faces, speak, take photos and videos automatically, and more.
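
To give a concrete feel for the kind of data RealSense™ exposes, here is a minimal C++ sketch that grabs one depth frame and queries the distance to the point at the center of the image. It uses the open-source librealsense project (the 2.x C++ API); this illustrates the camera library only, as an assumption on our part, and is not taken from the Segway Robot’s onboard software or SDK.

    // Minimal librealsense (2.x) sketch: read one depth frame and report the
    // distance to the center pixel. Illustrative only; the Segway Robot SDK
    // interfaces are separate and not shown here.
    #include <librealsense2/rs.hpp>
    #include <iostream>

    int main() {
        rs2::pipeline pipe;   // manages streaming from the RealSense camera
        pipe.start();         // start with the default depth configuration

        // Skip the first frames while auto-exposure settles.
        for (int i = 0; i < 30; ++i) pipe.wait_for_frames();

        rs2::frameset frames = pipe.wait_for_frames();
        rs2::depth_frame depth = frames.get_depth_frame();

        int cx = depth.get_width() / 2;
        int cy = depth.get_height() / 2;
        float meters = depth.get_distance(cx, cy);  // depth at image center

        std::cout << "Obstacle distance at image center: " << meters << " m\n";
        return 0;
    }

On the robot, depth frames like this one are the raw material for the obstacle reconstruction and movement tracking described above.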

We’ll be sharing more information and photos from the event on our social media profiles on Facebook and Twitter.

Some notes about our upcoming SDK and how you can get involved
Segway Robot will be introduced to the market in close collaboration with the developer community at large. Third-party developers and companies can extend the functionality of the robot through the hardware extension bay and the SDK interfaces. Since announcing the SDK program in January, we have received more than 1,200 applications from enthusiastic makers and creators around the world, spanning verticals that include education and research, consumer services, medical and elderly care, telepresence, and artificial intelligence.

We’re on track to launch the alpha version of the SDK in September (selected participants will be notified by June), during which we’ll work very closely with a small number of developers and organizations to bring their ideas to fruition. Based on how the alpha phase goes, we’ll introduce the beta version of the SDK to a larger set of developers in December 2016. Due to limited resources during these development phases, we will not be able to support every applicant. However, even if a particular concept is not selected for alpha or beta testing, we’ll still be opening up the SDK to the broader community thereafter, so please do not hesitate to share your interest or ideas with us through the form on our website: http://www.segwayrobotics.com

The SDK will include the following modules (a brief sketch of how they might fit together follows the list):

  1. Vision Library – such as obstacle identification and face detection
  2. Voice Library – such as voice command recognition, far-talk enhancement, text-to-speech
  3. Robot Control Library – to allow developers to program the robot’s responses according to the inputs from the user
  4. UI Library – to allow developers to create multimedia expressions for the robot
  5. Remote Protocol Library – to provide a communication mechanism between the robot and other devices
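
Since the SDK has not shipped yet, its real interfaces are still under wraps. Purely to illustrate how these modules could compose, here is a hypothetical C++ sketch; every type and method name in it (VisionLibrary, VoiceLibrary, RobotControl, and so on) is invented for this example and will not match the actual SDK.

    // Hypothetical composition of the SDK modules described above. All names
    // here are placeholders invented for illustration; none are real SDK APIs.
    #include <iostream>
    #include <string>

    // Stand-ins for three of the five planned modules.
    struct VisionLibrary {
        bool face_detected() const { return true; }          // placeholder vision event
    };
    struct VoiceLibrary {
        std::string listen() const { return "follow me"; }   // placeholder far-talk result
        void say(const std::string& text) const {            // placeholder text-to-speech
            std::cout << "TTS: " << text << "\n";
        }
    };
    struct RobotControl {
        void follow_person() const { std::cout << "Engaging person-following...\n"; }
    };

    int main() {
        VisionLibrary vision;
        VoiceLibrary voice;
        RobotControl control;

        // Wire a recognized voice command to a robot behavior, gated on vision.
        if (vision.face_detected() && voice.listen() == "follow me") {
            voice.say("Okay, I will follow you.");
            control.follow_person();
        }
        return 0;
    }

The point of the sketch is the division of responsibilities: perception (Vision and Voice), actuation (Robot Control), and presentation (UI), with the Remote Protocol Library bridging the robot to external devices.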

All accepted developers in our program will have direct access to support and advice from our team. We can’t wait to see what interesting ideas you come up with to enable the robot to support new applications and to interact with other devices. We believe that no matter how advanced our technology is, we can only have a positive effect if we collectively aim at improving the quality of human life by making it richer and more meaningful. Join us on this journey.