Microsoft Virtual Academy
Devs, are you looking forward to building apps with Kinect for Windows v2? In this Jump Start, explore the brand-new beta Software Development Kit (SDK) with experts from the Kinect engineering team, and see how Kinect v2 enables speech, gesture, and human understanding in applications and experiences.
Learn about the new APIs and app model, and see fascinating demos and samples (plus source code) for both desktop and Windows Store apps. Get the details on Kinect Fusion (real-time 3D modeling), Face Tracking, and Visual Gesture Builder. Discover the new sensor technology, natural user interface (NUI), accessibility potential, and practical applications.
Even if you don't have a Kinect device, you won't want to miss this entertaining event. The instructors even show you how to start building an app without a sensor. Instructors | Ben Lower – Program Manager; Rob Relyea – Program Manager
Get an introduction to Kinect for Windows and what it can do, hear how to get and install the SDK, and see how to use included sample code to get started.
Kinect has various data sources available, including Infrared, Color, Depth, Body, and Audio. Learn how to use all of the sources, and go deep into the programming model. Find out how to add Kinect support to a Windows Store application and how to start creating an application from scratch.
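To make the depth source concrete: it delivers a 16-bit distance value (in millimeters) per pixel, and a common first exercise is mapping those values to 8-bit grayscale for display. The sketch below is a pure-Python illustration of that idea only — the actual SDK delivers frames through its C#/C++ APIs (e.g. a depth frame reader), and the range bounds and sample values here are hypothetical.

```python
# Illustration only: real depth frames arrive via the SDK's C#/C++ APIs;
# this hypothetical frame is a stand-in for one row of depth pixels.

def depth_to_gray(depth_mm, min_mm=500, max_mm=4500):
    """Map 16-bit depth values (millimeters) to 8-bit gray intensities.

    Values outside the assumed reliable range are clamped; nearer
    pixels come out brighter so close objects stand out in a preview.
    """
    out = []
    for d in depth_mm:
        d = max(min_mm, min(d, max_mm))
        # Invert so near = bright, far = dark.
        out.append(int(255 * (max_mm - d) / (max_mm - min_mm)))
    return out

frame = [500, 2500, 4500]       # hypothetical depth pixels, in mm
print(depth_to_gray(frame))     # -> [255, 127, 0]
```

The same clamp-normalize-scale pattern applies whichever source you visualize; only the value range changes.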
Kinect provides new ways to interact with applications using gesture or voice. Experts explain how they created the hand cursor, the PHIZ (Physical Interaction Zone), and user interactions, and they go through sample code to show how these can be leveraged in your applications. Also, learn how to speech-enable your application.
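The core idea behind the PHIZ is mapping a hand position inside a physical box in front of the user onto screen coordinates. Here is a minimal sketch of that mapping, assuming hypothetical zone bounds (the actual zone the Kinect team uses is user-relative and more sophisticated):

```python
# Concept sketch of the PHIZ idea: normalize a camera-space hand position
# inside an interaction zone, then scale to screen pixels. Zone bounds
# below are hypothetical, not the values the platform actually uses.

def phiz_to_screen(hand_x, hand_y, zone, screen_w=1920, screen_h=1080):
    """Map a hand position (meters) inside the zone to pixel coordinates,
    clamping so the cursor stops at the screen edge."""
    left, right, top, bottom = zone
    nx = (hand_x - left) / (right - left)
    ny = (hand_y - top) / (bottom - top)
    nx = min(max(nx, 0.0), 1.0)
    ny = min(max(ny, 0.0), 1.0)
    return (round(nx * screen_w), round(ny * screen_h))

zone = (-0.3, 0.3, 0.5, -0.1)   # meters, shoulder-relative (hypothetical)
print(phiz_to_screen(0.0, 0.2, zone))   # zone center -> (960, 540)
```

Clamping at the zone edges is what keeps the cursor pinned to the screen border when the hand drifts outside the zone, rather than disappearing.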
Find out how a Unity3D plugin allows you to use Kinect data and APIs in your Unity game or application. And learn about Kinect Common Bridge, along with support for Cinder and openFrameworks. See sample applications, too.
Get the details on Kinect Fusion, which allows the sensor to be used to scan and create 3D models of people or objects. And hear how the Face APIs enable applications to detect faces in the scene, capture the user’s likeness, and produce detailed, 3D face models.
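At the heart of Kinect Fusion is integrating each depth frame into a voxel volume holding a truncated signed distance function (TSDF). The sketch below shows only the per-voxel update rule, in pure Python with hypothetical numbers; the real pipeline (camera-pose tracking, GPU raycasting) is far more involved.

```python
# Per-voxel TSDF update, the accumulation step at the core of volumetric
# fusion. Truncation band and depths below are hypothetical.

TRUNC = 0.03   # truncation band, meters

def update_voxel(tsdf, weight, voxel_depth, measured_depth, max_weight=64):
    """Fold one depth measurement into a voxel's running TSDF average.

    voxel_depth:    distance from the camera to the voxel center
    measured_depth: depth the sensor reported along that ray
    """
    sdf = measured_depth - voxel_depth       # + in front of the surface
    if sdf < -TRUNC:
        return tsdf, weight                  # far behind the surface: skip
    sdf = min(sdf, TRUNC) / TRUNC            # truncate, normalize to [-1, 1]
    new_weight = min(weight + 1, max_weight)
    new_tsdf = (tsdf * weight + sdf) / new_weight
    return new_tsdf, new_weight

# A voxel 2 cm in front of a surface seen at 1.0 m settles at sdf/TRUNC.
t, w = 0.0, 0
for _ in range(3):
    t, w = update_voxel(t, w, voxel_depth=0.98, measured_depth=1.0)
print(round(t, 2), w)
```

Averaging many frames this way is what smooths sensor noise into the clean 3D models Fusion produces; the surface is later extracted where the TSDF crosses zero.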
Find out about Kinect Studio, which allows sensor data to be recorded and played back (helpful when developing and debugging your application). Explore different approaches to creating custom gestures, and discover the new Gesture Builder tool.
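Gesture Builder trains detectors from recorded clips; the alternative it replaces is hand-coding a heuristic over joint positions. As an illustration of that hand-coded approach, here is a tiny "hand raised" detector run over a sequence of frames — the frames are hypothetical stand-ins for Kinect Studio playback data, not real SDK objects.

```python
# Hand-coded gesture heuristic over recorded joint data (hypothetical
# frames standing in for Kinect Studio playback). Gesture Builder exists
# to replace this kind of brittle, hand-tuned logic with trained detectors.

def hand_raised(frames, hold_frames=3):
    """Fire once the right hand stays above the head for hold_frames
    consecutive frames, filtering out momentary tracking jitter."""
    streak = 0
    for f in frames:
        if f["hand_right_y"] > f["head_y"]:
            streak += 1
            if streak >= hold_frames:
                return True
        else:
            streak = 0
    return False

clip = [
    {"head_y": 1.6, "hand_right_y": 1.2},   # hand at side
    {"head_y": 1.6, "hand_right_y": 1.7},   # raised
    {"head_y": 1.6, "hand_right_y": 1.7},
    {"head_y": 1.6, "hand_right_y": 1.7},   # held long enough: fires
]
print(hand_raised(clip))   # -> True
```

Recording a clip once and replaying it against logic like this is exactly the develop-and-debug loop Kinect Studio enables, with no sensor attached.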
Walk through two case studies, and take a deep dive into the joint filtering work done in the platform to achieve a stable hand cursor. The module includes a depth-based approach to gesture detection and presents the advantages it can have over a skeleton joint-based approach.
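One common family of joint filters is double exponential (Holt) smoothing, which tracks both a smoothed position and a trend so it suppresses jitter with less lag than a plain moving average. The sketch below is a generic Holt filter on one hand coordinate, with made-up gains and samples — an illustration of the technique, not the exact filter the platform ships.

```python
# Generic double exponential (Holt) smoothing for one joint coordinate.
# Gains and input samples are hypothetical; the platform's actual joint
# filter is its own implementation.

class JointFilter:
    def __init__(self, alpha=0.5, beta=0.3):
        self.alpha, self.beta = alpha, beta   # smoothing gain, trend gain
        self.level = self.trend = None

    def update(self, x):
        if self.level is None:                # first sample: no history yet
            self.level, self.trend = x, 0.0
            return x
        prev = self.level
        # Blend the new sample with the previous prediction (level + trend).
        self.level = self.alpha * x + (1 - self.alpha) * (self.level + self.trend)
        self.trend = self.beta * (self.level - prev) + (1 - self.beta) * self.trend
        return self.level

f = JointFilter()
noisy = [0.50, 0.52, 0.49, 0.51, 0.90]        # hand X: jitter, then a jump
smoothed = [round(f.update(x), 3) for x in noisy]
print(smoothed)
```

Tuning is a trade-off: higher alpha follows real motion (the final jump) faster but lets more jitter through to the cursor, which is why filter work like this deserved its own deep dive.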
The information in this module provides you with an opportunity to dive deeper into Programming Kinect for Windows v2, at your own pace.