Second version of HoloLens HPU will incorporate AI coprocessor for implementing DNNs


Posted July 23, 2017 | by Microsoft Research Blog

By Marc Pollefeys, Director of Science, HoloLens

It is not an exaggeration to say that deep learning has taken the world of computer vision, and many other recognition tasks, by storm. Many of the most difficult recognition problems have seen astonishing gains over the past few years.

Although we have seen large improvements in the accuracy of recognition as a result of Deep Neural Networks (DNNs), deep learning approaches have two well-known challenges: they require large amounts of labelled data for training, and they require a type of compute that is not amenable to current general purpose processor/memory architectures. Some companies have responded with architectures designed to address the particular type of massively parallel compute required for DNNs, including our own use of FPGAs, for example, but to date these approaches have primarily enhanced existing cloud computing fabrics.

But I work on HoloLens, and in HoloLens, we're in the business of making untethered mixed reality devices. We put the battery on your head, in addition to the compute, the sensors, and the display. Any compute we want to run locally for low latency, which you need for things like hand-tracking, has to run off the same battery that powers everything else. So what do you do?

You create custom silicon to do it.

First, a bit of background. HoloLens contains a custom multiprocessor called the Holographic Processing Unit, or HPU. It is responsible for processing the information coming from all of the on-board sensors, including Microsoft’s custom time-of-flight depth sensor, head-tracking cameras, the inertial measurement unit (IMU), and the infrared camera. The HPU is part of what makes HoloLens the world’s first–and still only–fully self-contained holographic computer.

Today, Harry Shum, executive vice president of our Artificial Intelligence and Research Group, announced in a keynote speech at CVPR 2017 that the second version of the HPU, currently under development, will incorporate an AI coprocessor to natively and flexibly implement DNNs. The chip supports a wide variety of layer types, fully programmable by us. Harry showed an early spin of the second version of the HPU running live code implementing hand segmentation.

The AI coprocessor is designed to work in the next version of HoloLens, running continuously, off the HoloLens battery. This is just one example of the new capabilities we are developing for HoloLens, and is the kind of thing you can do when you have the willingness and capacity to invest for the long term, as Microsoft has done throughout its history. And this is the kind of thinking you need if you’re going to develop mixed reality devices that are themselves intelligent. Mixed reality and artificial intelligence represent the future of computing, and we’re excited to be advancing this frontier.

Source: https://www.microsoft.com/en-us/research/blog/second-version-hololens-hpu-will-incorporate-ai-coprocessor-implementing-dnns/

AI for security: Microsoft Security Risk Detection makes debut

Full details at: https://blogs.microsoft.com/next/2017/07/21/ai-for-security-microsoft-security-risk-detection-makes-debut/

Microsoft is making generally available a cloud service that uses artificial intelligence to track down bugs in software, and it will begin offering a preview version of the tool for Linux users as well.

Microsoft Security Risk Detection, previously known as Project Springfield, is a cloud-based tool that developers can use to look for bugs and other security vulnerabilities in the software they are preparing to release or use. The tool is designed to catch the vulnerabilities before the software goes out the door, saving companies the heartache of having to patch a bug, deal with crashes or respond to an attack after it has been released.

Free course on Deep Learning for Self-Driving Cars

A free course and introduction to deep learning through the applied task of building a self-driving car. Taught by Lex Fridman.

Visit http://selfdrivingcars.mit.edu/ for full details of “MIT 6.S094: Deep Learning for Self-Driving Cars”.

Data Science: Performance of Python vs Pandas vs Numpy

Re-post from http://machinelearningexp.com/data-science-performance-of-python-vs-pandas-vs-numpy/

Speed and time are key factors for any Data Scientist. In business, you do not usually work with toy datasets with thousands of samples. It is more likely that your datasets will contain millions, or even hundreds of millions, of samples. Customer orders, web logs, billing events, stock prices: datasets now are huge.

I assume you do not want to spend hours or days waiting for your data processing to complete. The biggest dataset I have worked with so far contained over 30 million records. When I ran my data processing script on this dataset for the first time, the estimated time to complete was around 4 days! I do not have a very powerful machine (a MacBook Air with an i5 and 4 GB of RAM), but the most I could accept was running the script over one night, not multiple days.

Thanks to some clever tricks, I was able to decrease this running time to a few hours. This post explains the first step to achieving good data processing performance: choosing the right library/framework for your dataset.

The graph below shows the results of my experiment (details below), calculated as processing speed measured against the processing speed of pure Python.

Python vs Numpy vs Pandas

As you can see, NumPy performance is several times higher than Pandas performance. I personally love Pandas for simplifying many tedious data science tasks, and I use it wherever I can. But if the expected processing time stretches to many hours then, with regret, I switch from Pandas to NumPy.

I am very aware that actual performance may vary significantly depending on the task and the type of processing, so please treat these results as indicative only. There is no single test that can show an “overall” comparison of performance for any set of software tools.
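The original benchmark script is in the linked post. As a rough, self-contained sketch of the kind of comparison described here (the dataset size, column names, and aggregation are invented for illustration), something like the following contrasts the three approaches on one simple computation:

```python
import time

import numpy as np
import pandas as pd

# Hypothetical dataset: a few million samples with two numeric fields.
N = 5_000_000
rng = np.random.default_rng(0)
prices = rng.random(N)
quantities = rng.integers(1, 10, N)

def timed(label, fn):
    """Run fn once and print how long it took."""
    start = time.perf_counter()
    result = fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")
    return result

# Pure Python: an explicit loop over plain lists.
price_list, qty_list = prices.tolist(), quantities.tolist()
timed("pure Python", lambda: sum(p * q for p, q in zip(price_list, qty_list)))

# Pandas: vectorised column arithmetic on a DataFrame.
df = pd.DataFrame({"price": prices, "quantity": quantities})
timed("Pandas", lambda: (df["price"] * df["quantity"]).sum())

# NumPy: the same computation directly on ndarrays.
timed("NumPy", lambda: (prices * quantities).sum())
```

The exact ratios will depend on the operation and the hardware, as the author cautions above.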

Posted on July 15, 2017

See the full post at http://machinelearningexp.com/data-science-performance-of-python-vs-pandas-vs-numpy/


Microsoft’s new iPhone app narrates the world for blind people


“Microsoft has released Seeing AI — a smartphone app that uses computer vision to describe the world for the visually impaired. With the app downloaded, the users can point their phone’s camera at a person and it’ll say who they are and how they’re feeling. They can also point it at a product and it’ll tell them what it is. All of this is done using artificial intelligence that runs locally on their phone”…

https://www.theverge.com/2017/7/12/15958174/microsoft-ai-seeing-app-blind-ios

Webinar: Parallelize R Code Using Apache® Spark™ on August 15th, 2017

R is the latest language added to Apache Spark, and the SparkR API is slightly different from PySpark. SparkR’s evolving interface to Apache Spark offers a wide range of APIs and capabilities to Data Scientists and Statisticians. With the release of Spark 2.0, and subsequent releases, the R API officially supports executing user code on distributed data. This is done primarily through a family of apply() functions.
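The apply() family in SparkR includes, for example, gapply(), which runs a user-supplied R function on each group of a distributed DataFrame. As a rough sketch of the same idea in Python rather than R (using PySpark's applyInPandas, available in PySpark 3.x with pyarrow installed; the column names and per-group function are invented for illustration):

```python
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("apply-sketch").getOrCreate()

# Toy distributed DataFrame; in practice this would be millions of rows.
df = spark.createDataFrame(
    [("a", 1.0), ("a", 2.0), ("b", 5.0), ("b", 7.0)],
    ["group", "value"],
)

def demean(pdf: pd.DataFrame) -> pd.DataFrame:
    """Arbitrary per-group user code: subtract the group mean."""
    pdf["value"] = pdf["value"] - pdf["value"].mean()
    return pdf

# Each group is handed to the Python function as a pandas DataFrame,
# analogous to how gapply() hands each group to an R function.
result = df.groupBy("group").applyInPandas(demean, schema="group string, value double")
result.show()

spark.stop()
```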

In this Data Science Central webinar, we will explore the following:
• Provide an overview of this new functionality in SparkR
• Show how to use this API with some changes to regular code with apply()
• Focus on how to correctly use this API to parallelize existing R packages
• Consider performance and examine correctness when using the apply() family of functions in SparkR

Speaker:
Hossein Falaki, Software Engineer — Databricks Inc.

Hosted by: Bill Vorhies, Editorial Director — Data Science Central
Title: Parallelize R Code Using Apache® Spark™
Date: Tuesday, August 15th, 2017
Time: 9:00 AM – 10:00 AM PDT
