Second version of HoloLens HPU will incorporate AI coprocessor for implementing DNNs


Posted July 23, 2017 | by Microsoft Research Blog

By Marc Pollefeys, Director of Science, HoloLens

It is not an exaggeration to say that deep learning has taken the world of computer vision, and many other recognition tasks, by storm. Many of the most difficult recognition problems have seen astonishing gains over the past few years.

Although we have seen large improvements in the accuracy of recognition as a result of Deep Neural Networks (DNNs), deep learning approaches have two well-known challenges: they require large amounts of labelled data for training, and they require a type of compute that is not amenable to current general purpose processor/memory architectures. Some companies have responded with architectures designed to address the particular type of massively parallel compute required for DNNs, including our own use of FPGAs, for example, but to date these approaches have primarily enhanced existing cloud computing fabrics.

But I work on HoloLens, and in HoloLens, we’re in the business of making untethered mixed reality devices. We put the battery on your head, in addition to the compute, the sensors, and the display. Any compute we want to run locally for low latency, which you need for things like hand-tracking, has to run off the same battery that powers everything else. So what do you do?

You create custom silicon to do it.

First, a bit of background. HoloLens contains a custom multiprocessor called the Holographic Processing Unit, or HPU. It is responsible for processing the information coming from all of the on-board sensors, including Microsoft’s custom time-of-flight depth sensor, head-tracking cameras, the inertial measurement unit (IMU), and the infrared camera. The HPU is part of what makes HoloLens the world’s first–and still only–fully self-contained holographic computer.

Today, Harry Shum, executive vice president of our Artificial Intelligence and Research Group, announced in a keynote speech at CVPR 2017 that the second version of the HPU, currently under development, will incorporate an AI coprocessor to natively and flexibly implement DNNs. The chip supports a wide variety of layer types, fully programmable by us. Harry showed an early spin of the second version of the HPU running live code implementing hand segmentation.
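Not having the HPU’s internals to show, here is a toy Python sketch of the kind of workload such an AI coprocessor accelerates: a DNN forward pass that labels every pixel of a camera frame as hand or not-hand. Everything here (layer shapes, weights, sizes) is invented for illustration; a real hand-segmentation network is far larger and runs on dedicated silicon, not NumPy.

```python
# Toy per-pixel hand segmentation: conv -> ReLU -> conv -> sigmoid.
# All weights are random stand-ins for trained parameters.
import numpy as np

def conv2d(image, kernel):
    """Naive 'valid' 2-D convolution; this inner loop is exactly the kind
    of massively parallel arithmetic a DNN accelerator bakes into hardware."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
frame = rng.random((64, 64))            # stand-in for a sensor frame
k1 = rng.standard_normal((3, 3))        # first conv layer's kernel
k2 = rng.standard_normal((3, 3))        # second conv layer's kernel

features = relu(conv2d(frame, k1))      # feature extraction
scores = conv2d(features, k2)           # per-pixel hand score
mask = sigmoid(scores) > 0.5            # boolean hand / not-hand mask
print(mask.shape, mask.mean())          # fraction of pixels labelled "hand"
```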

The AI coprocessor is designed to work in the next version of HoloLens, running continuously, off the HoloLens battery. This is just one example of the new capabilities we are developing for HoloLens, and is the kind of thing you can do when you have the willingness and capacity to invest for the long term, as Microsoft has done throughout its history. And this is the kind of thinking you need if you’re going to develop mixed reality devices that are themselves intelligent. Mixed reality and artificial intelligence represent the future of computing, and we’re excited to be advancing this frontier.

Source: https://www.microsoft.com/en-us/research/blog/second-version-hololens-hpu-will-incorporate-ai-coprocessor-implementing-dnns/

AI for security: Microsoft Security Risk Detection makes debut

Full details at https://blogs.microsoft.com/next/2017/07/21/ai-for-security-microsoft-security-risk-detection-makes-debut/

Microsoft is making generally available a cloud service that uses artificial intelligence to track down bugs in software, and it will begin offering a preview version of the tool for Linux users as well.

Microsoft Security Risk Detection, previously known as Project Springfield, is a cloud-based tool that developers can use to look for bugs and other security vulnerabilities in the software they are preparing to release or use. The tool is designed to catch the vulnerabilities before the software goes out the door, saving companies the heartache of having to patch a bug, deal with crashes or respond to an attack after it has been released.
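For readers unfamiliar with the territory: this class of tool is built on fuzz testing, feeding a target program many mutated inputs and recording which ones crash it, with Microsoft’s service layering AI-driven input generation on top. Below is a deliberately tiny Python sketch of the basic fuzzing loop; the buggy parser is invented for illustration and has nothing to do with the service’s actual targets or techniques.

```python
# Minimal mutation-based fuzzing loop against a toy, deliberately buggy parser.
import random

def fragile_parser(data: bytes) -> int:
    """Stand-in target with a hidden bug: crashes on one byte pattern."""
    if len(data) > 3 and data[0] == 0xFF and data[3] == 0x00:
        raise ValueError("parser crash")
    return len(data)

def mutate(seed: bytes) -> bytes:
    """Randomly overwrite one byte of the input."""
    data = bytearray(seed)
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

seed = b"\x00" * 8
crashes = []
for _ in range(100_000):
    candidate = mutate(mutate(seed))
    try:
        fragile_parser(candidate)
    except ValueError:
        crashes.append(candidate)   # a security-relevant finding to triage

print(f"found {len(crashes)} crashing inputs")
```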

New M Functionality And Behaviour In Power BI Custom Data Connectors

Chris Webb's BI Blog

Over the past few weeks I’ve spent some time playing around with Power BI custom data connectors, and while I don’t have anything to share publicly yet (other people are way ahead of me in this respect – see the work of Igor Cotruta, Miguel Escobar and Kasper de Jonge among others) I have learned some interesting things that are worth blogging about.

First of all, the data privacy rules around combining data from different data sources do not apply in custom data connector code. As the docs say here:

Data combination checks do not occur when accessing multiple data sources from within an extension. Since all data source calls made from within the extension inherit the same authorization context, it is assumed they are “safe” to combine. Your extension will always be treated as a single data source when it comes to data combination rules. Users would…


Azure SQL Data Warehouse: Troubleshoot with the Resource Health check

New update for Azure SQL Data Warehouse…

Reduce troubleshooting time with the upgraded Resource Health check for SQL Data Warehouse.

This upgrade considers the health status of all components of the SQL Data Warehouse architecture, including each SQL database distribution and the SQL Data Warehouse engine on each compute node. Login and heartbeat signals from each component are emitted at least once every 2 minutes, providing you with a low-latency, holistic view of the health status of your data warehouse. If your instance is Unavailable, we will provide the reason along with recommended actions that you should take.

The Resource Health check can detect unavailability reasons, such as when your instance is pausing, scaling, or upgrading. This feature also detects when there are any connection issues, whether they are user connections or inner SQL database connections.

You can check the health of SQL Data Warehouse by signing in to the Azure portal and clicking the Resource Health blade.
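If you would rather poll health from a script than from the portal, the Azure Resource Health API exposes the same signal. The sketch below is hedged: the endpoint path, api-version and response fields are my assumptions based on the generic Resource Health REST API, and the subscription, resource group, server, database and bearer token are all placeholders.

```python
# Hedged sketch: query the current availability status of a SQL Data
# Warehouse through the (assumed) Microsoft.ResourceHealth endpoint.
import requests

RESOURCE = ("/subscriptions/<subscription-id>/resourceGroups/<rg>"
            "/providers/Microsoft.Sql/servers/<server>/databases/<dw-name>")
URL = (f"https://management.azure.com{RESOURCE}"
       "/providers/Microsoft.ResourceHealth/availabilityStatuses/current")

resp = requests.get(
    URL,
    params={"api-version": "2015-01-01"},             # assumed api-version
    headers={"Authorization": "Bearer <aad-token>"},  # Azure AD bearer token
)
resp.raise_for_status()
props = resp.json()["properties"]
print(props["availabilityState"])                     # e.g. "Available"
print(props.get("reasonType"), props.get("recommendedAction"))
```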

Source: https://azure.microsoft.com/en-us/updates/azure-sql-data-warehouse-troubleshoot-with-the-resource-health-check/

Free course on Deep Learning for Self-Driving Cars

A free course and introduction to deep learning through the applied task of building a self-driving car. Taught by Lex Fridman.

Visit http://selfdrivingcars.mit.edu/ for full details of “MIT 6.S094: Deep Learning for Self-Driving Cars”.

RESTful interactions with Azure Cosmos DB resources using the DocumentDB API, CRUD operations, API for MongoDB and additional quick starts

Azure Cosmos DB is a globally distributed system that supports document, graph, and key-value data models, which Microsoft has classified as a multi-model database service for mission-critical systems.

It also supports both the API for MongoDB and the DocumentDB API for creating, querying, and managing resources.

If you would like to understand how to answer any of the following questions:

  • How do the standard HTTP methods work with Azure Cosmos DB resources?
  • How do I create a new resource using POST?
  • How do I register a stored procedure using POST?
  • How does Azure Cosmos DB support concurrency control?
  • What are the connectivity options for HTTPS and TCP?

Interaction model using the standard HTTP methods

Then take a look at the Azure Cosmos DB REST API documentation, first published on 18th July 2017, which covers these topics in full.


If interested in performing CRUD operations using REST, see Common tasks using the Azure Cosmos DB REST API.
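As a flavour of those tasks, here is a minimal Python sketch of the POST interaction from the questions above: creating a document in a collection, signing the request with the hashed master-key scheme the REST API docs describe. Account, key, database and collection names are placeholders, and the x-ms-version shown is simply one that was current at the time of writing.

```python
# Minimal sketch: create a document via the DocumentDB REST API with POST.
import base64, hashlib, hmac, json, urllib.parse
from email.utils import formatdate

import requests

ACCOUNT, MASTER_KEY = "<account>", "<base64-master-key>"   # placeholders
DB, COLL = "mydb", "mycoll"

def auth_header(verb, resource_type, resource_link, date):
    """Hashed master-key token: HMAC-SHA256 over verb, resource type,
    resource link and the x-ms-date value."""
    string_to_sign = (f"{verb.lower()}\n{resource_type.lower()}\n"
                      f"{resource_link}\n{date.lower()}\n\n")
    digest = hmac.new(base64.b64decode(MASTER_KEY),
                      string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    sig = base64.b64encode(digest).decode()
    return urllib.parse.quote(f"type=master&ver=1.0&sig={sig}")

date = formatdate(usegmt=True)              # RFC 1123 date, required
resource_link = f"dbs/{DB}/colls/{COLL}"    # the collection being posted to
resp = requests.post(
    f"https://{ACCOUNT}.documents.azure.com/{resource_link}/docs",
    headers={
        "Authorization": auth_header("post", "docs", resource_link, date),
        "x-ms-date": date,
        "x-ms-version": "2017-02-22",
        "Content-Type": "application/json",
    },
    data=json.dumps({"id": "order-1", "status": "shipped"}),
)
print(resp.status_code, resp.json().get("_etag"))
```

The returned _etag is also the handle for concurrency control: send it back in an If-Match header on a later PUT, and the service answers 412 Precondition Failed if the document changed in the meantime.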


If interested in performing CRUD operations using C# and REST, see the REST from .NET Sample on GitHub which can help you out.


If interested in more details of the MongoDB API, then see Introduction to Azure Cosmos DB: API for MongoDB which covers the benefits of using Azure Cosmos DB for MongoDB applications.


MongoDB wire protocol
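Because Cosmos DB speaks the MongoDB wire protocol, a stock driver connects unchanged. Here is a minimal sketch with pymongo, assuming the connection-string shape (host, port 10255, SSL required) that the Azure portal hands out; the account name and key are placeholders.

```python
# Minimal sketch: talk to Cosmos DB through a standard MongoDB driver.
from pymongo import MongoClient

ACCOUNT, KEY = "<account>", "<primary-key>"        # placeholders
client = MongoClient(
    f"mongodb://{ACCOUNT}:{KEY}@{ACCOUNT}.documents.azure.com:10255/?ssl=true"
)

orders = client["mydb"]["orders"]                  # created on first use
orders.insert_one({"id": "order-1", "status": "shipped"})
print(orders.find_one({"id": "order-1"}))
```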


… and finally, if you are looking for help getting started, the MongoDB quick starts in the Azure Cosmos DB documentation will help you out.

Data Science: Performance of Python vs Pandas vs Numpy

Re-post from http://machinelearningexp.com/data-science-performance-of-python-vs-pandas-vs-numpy/

Speed and time are key factors for any data scientist. In business, you do not usually work with toy datasets containing thousands of samples. It is more likely that your datasets will contain millions, or hundreds of millions, of samples. Customer orders, web logs, billing events, stock prices – datasets now are huge.

I assume you do not want to spend hours or days waiting for your data processing to complete. The biggest dataset I have worked with so far contained over 30 million records. When I ran my data processing script on this dataset for the first time, the estimated time to complete was around 4 days! I do not have a very powerful machine (a MacBook Air with an i5 and 4 GB of RAM), but the most I could accept was running the script overnight, not over multiple days.

Thanks to some clever tricks, I was able to decrease this running time to a few hours. This post will explain the first step to achieving good data processing performance – choosing the right library/framework for your dataset.

The graph below shows the results of my experiment (details below), calculated as processing speed measured against the processing speed of pure Python.

Python vs Numpy vs Pandas

As you can see, Numpy performance is several times higher than Pandas performance. I personally love Pandas for simplifying many tedious data science tasks, and I use it wherever I can. But if the expected processing time runs to many hours then, with regret, I switch from Pandas to Numpy.

I am very aware that actual performance may vary significantly depending on the task and type of processing, so please treat these results as indicative only. There is no single test that can show an “overall” comparison of performance for any set of software tools.
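To make the comparison concrete, here is a small benchmark in the same spirit (the author’s exact task is not shown in this excerpt): summing ten million values with a pure Python loop, with Pandas and with Numpy. You should see the pure Python loop come out far behind the vectorised libraries; how Pandas and Numpy compare on your machine will depend on the operation, which is exactly the author’s point about testing for your own task.

```python
# Compare pure Python, Pandas and Numpy on the same simple reduction.
import time

import numpy as np
import pandas as pd

values = list(range(10_000_000))   # pure-Python list of ints
series = pd.Series(values)         # same data as a Pandas Series
array = np.array(values)           # same data as a Numpy array

def timed(label, fn):
    start = time.perf_counter()
    result = fn()
    print(f"{label:12s} {time.perf_counter() - start:8.3f}s  sum={result}")

timed("pure Python", lambda: sum(values))
timed("Pandas", lambda: series.sum())
timed("Numpy", lambda: array.sum())
```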

Posted on July 15, 2017

see full post @ http://machinelearningexp.com/data-science-performance-of-python-vs-pandas-vs-numpy/


Amazon brings .Net Core support to AWS Cloud


Re-post from http://opensourceforu.com/2017/07/amazon-brings-net-core-support-aws-cloud/

Encouraging developers to build cross-platform applications, Amazon has added .Net Core support to its AWS Cloud services. The services upgraded with the new support include AWS CodeStar and AWS CodeBuild.

“The support for .Net Core in AWS CodeStar and AWS CodeBuild opens the door for .Net developers to take advantage of the benefits of Continuous Integration and Delivery when building .Net based solutions on AWS,” said Tara Walker, technical evangelist, Amazon Web Services (AWS), in a statement.

The AWS team launched the CodeStar service back in April for Amazon EC2, AWS Elastic Beanstalk and AWS Lambda projects using five programming languages: JavaScript, Java, Python, Ruby and PHP. Though the original list of supported languages already covered a large share of developers, Amazon has now set out to target developers on Microsoft’s Azure by enabling .Net Core support.

Deploy code on Amazon EC2 and AWS Lambda

Developers can leverage the latest support to build and deploy their .Net Core application code to both Amazon EC2 and AWS Lambda. This ability comes through the CodeBuild service, which brings two new project templates to AWS CodeStar for .Net Core applications. There is also sample code and a full software development toolchain to ease development.

Importantly, Visual Studio 2017 is required, alongside the AWS Toolkit for Visual Studio 2017, to start building .Net Core applications for Amazon’s cloud platform. You can also deploy your existing .Net Core code to enable your applications on AWS.

Posted on July 13, 2017