Creating custom Power BI visuals (… and adding interactivity to a Power BI dashboard)

If you are wondering how to create custom visuals for Power BI then, handily, there is an increasing number of open source samples and visuals becoming available.

One such visualisation is the Drilldown Player, released by Microsoft as open source and built in conjunction with their partner Gramener (http://gramener.com).

You can get the code from GitHub @ https://github.com/Microsoft/powerbi-visuals-drilldown-player.

You can get the compiled visual @ https://store.office.com/en-us/app.aspx?assetid=WA104381035&sourcecorrid=bde0be33-be77-400c-a17c-19849a52e1f5&ui=en-US&rs=en-US&ad=US&appredirect=false
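If you want to experiment with writing a visual of your own, the basic shape is simple: a custom visual is a TypeScript class implementing the `IVisual` interface, which Power BI constructs with a host element and calls back whenever the data or viewport changes. Here is a minimal, hypothetical sketch using the standard powerbi-visuals tooling; the class name and rendering logic are mine for illustration, not the Drilldown Player source.

```typescript
import powerbi from "powerbi-visuals-api";
import IVisual = powerbi.extensibility.visual.IVisual;
import VisualConstructorOptions = powerbi.extensibility.visual.VisualConstructorOptions;
import VisualUpdateOptions = powerbi.extensibility.visual.VisualUpdateOptions;

// Hypothetical minimal visual: renders the first category column of the
// bound data as a plain list. Not the Drilldown Player implementation.
export class SimpleListVisual implements IVisual {
    private root: HTMLElement;

    constructor(options: VisualConstructorOptions) {
        // Power BI hands the visual a container element to render into.
        this.root = options.element;
    }

    public update(options: VisualUpdateOptions): void {
        // Called whenever the data, filters, or viewport change.
        this.root.innerHTML = "";
        const dataView = options.dataViews && options.dataViews[0];
        const values = dataView?.categorical?.categories?.[0]?.values ?? [];
        for (const value of values) {
            const item = document.createElement("div");
            item.textContent = String(value);
            this.root.appendChild(item);
        }
    }
}
```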

Chris Webb recently shared a blog post about using this visual to add interactivity… Creating Animated Reports In Power BI With The Drilldown Player Custom Visual

Chris Webb's BI Blog

Last week I had the chance to do something I have not done before: build a Power BI report to be displayed on a big screen hanging on a wall. To make up for the loss of user interactivity, I used the new Drilldown Player custom visual to cycle through different selections and display a new slice of data every few seconds; Devin Knight’s blog post here has a great summary of how to use it. However I wasn’t happy about the look of the Drilldown Player visual in this particular report: the play/stop/pause buttons aren’t much use if you can’t click on them and the visual doesn’t show all of the values that it is cycling through. As a result I hid the visual behind another one and came up with a different way of displaying the currently-displayed selection.

Here’s a simple example of what I did. Imagine you…


Development Workflows for Data Scientists • Free eBook

Enabling Fast, Efficient, and Reproducible Results for Data Science • via GitHub

GitHub partnered with O’Reilly Media to examine how data science and analytics teams at several data-driven organizations are improving the way they define, enforce, and automate development workflows.

Download this complimentary book from https://resources.github.com/whitepapers/data-science/

Power BI custom visual from Visio

Visualize business process workflows, real-world layouts such as factory floor plans, network diagrams, organization structures, or any illustration created in Microsoft Visio, and easily connect it to Power BI data. Power BI data can be represented contextually as colours or text on the Visio diagram. The Visio custom visual now makes it possible to drive operational intelligence effectively.

MEAN.js with Cosmos DB on Azure

(a YouTube series by John Papa)

Cosmos DB is of significant interest to me because the projects I have been engaged in over the past couple of years use MongoDB and the MEAN stack in several ways. Scaling MongoDB has always been a bit of a pain for us, and Cosmos DB on Azure looks to relieve a lot of the headaches we have had.

MEAN stands for MongoDB, Express, Angular and Node.

I am not the author of these videos – this is a reference list for a YouTube series by John Papa introducing MEAN with Cosmos DB on Azure. I would normally link directly to the creator's blog post for a series such as this, but it seems to be offline just now, so I thought I would share a full list of the current videos here. Hopefully the original link, https://johnpapa.net/angular-cosmosdb-1/, will work again soon.


MEAN.js with Cosmos DB – Part 1: Introduction

John builds a lot of apps with MongoDB, Express, Angular and Node (MEAN). MongoDB just works so well with these, but recently he has been using Cosmos DB on Azure in its place because it is easy to use and scale, is super fast, and does not require him to change how he codes.


MEAN.js with Cosmos DB – Part 2: Creating the Node.js and Express App

John creates a Node.js and Express app along with the Angular CLI, then creates a web API endpoint and tries it out.
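As a rough illustration of the kind of endpoint built in this video, here is a minimal Express API sketch in TypeScript; the `/api/heroes` route and sample data are hypothetical placeholders, not John's actual code.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical in-memory data standing in for a real database.
const heroes = [
    { id: 1, name: "Aurelia" },
    { id: 2, name: "Rocket" },
];

// A simple web API endpoint returning JSON.
app.get("/api/heroes", (_req, res) => {
    res.json(heroes);
});

app.listen(3000, () => console.log("API listening on http://localhost:3000"));
```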


MEAN.js with Cosmos DB – Part 3: Angular and Express APIs

The A in MEAN stands for Angular. This video shows how to build an Angular UI that talks to the Express API, with GET, POST, PUT, and DELETE.
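To give a flavour of the Angular side, here is a minimal, hypothetical service using Angular's HttpClient to call all four verbs against the Express API. The `Hero` interface and `/api/heroes` route are assumptions carried over from the sketch above, and the `providedIn: "root"` registration is a newer Angular convention than the series itself used.

```typescript
import { Injectable } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { Observable } from "rxjs";

// Assumed shape of the data returned by the hypothetical API.
export interface Hero {
    id: number;
    name: string;
}

@Injectable({ providedIn: "root" })
export class HeroService {
    private readonly apiUrl = "/api/heroes";

    constructor(private http: HttpClient) {}

    getHeroes(): Observable<Hero[]> {
        return this.http.get<Hero[]>(this.apiUrl);
    }

    addHero(hero: Hero): Observable<Hero> {
        return this.http.post<Hero>(this.apiUrl, hero);
    }

    updateHero(hero: Hero): Observable<Hero> {
        return this.http.put<Hero>(`${this.apiUrl}/${hero.id}`, hero);
    }

    deleteHero(id: number): Observable<void> {
        return this.http.delete<void>(`${this.apiUrl}/${id}`);
    }
}
```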


MEAN.js with Cosmos DB – Part 4: Creating and Deploying Cosmos DB

Using the Azure CLI to create a Cosmos DB account with the MongoDB API and deploy it to Azure, then viewing what was created in the Azure portal.


MEAN.js with Cosmos DB – Part 5: Querying Cosmos DB

How to connect to the Cosmos DB database through its MongoDB API using Mongoose, and query it for data.
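Because Cosmos DB's MongoDB API speaks the MongoDB wire protocol, Mongoose code looks the same as it would against MongoDB itself. A minimal sketch, assuming a hypothetical `Hero` model and a connection string supplied via an environment variable:

```typescript
import mongoose, { Schema } from "mongoose";

// Hypothetical schema matching the API examples above.
const heroSchema = new Schema({ id: Number, name: String });
const Hero = mongoose.model("Hero", heroSchema);

async function main() {
    // The connection string comes from the Cosmos DB account's
    // Connection String blade in the Azure portal (placeholder here).
    await mongoose.connect(process.env.COSMOSDB_CONNECTION_STRING ?? "");

    // Query Cosmos DB exactly as you would MongoDB.
    const heroes = await Hero.find().exec();
    console.log(heroes);

    await mongoose.disconnect();
}

main().catch(console.error);
```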

You can subscribe to John's YouTube series at https://www.youtube.com/playlist?list=PLbnXt_I6OfBWU9JiDNewZm11-7eFQf70M or follow him on Twitter @John_Papa.

Second version of HoloLens HPU will incorporate AI coprocessor for implementing DNNs


Posted July 23, 2017 | by Microsoft Research Blog

By Marc Pollefeys, Director of Science, HoloLens

It is not an exaggeration to say that deep learning has taken the world of computer vision, and many other recognition tasks, by storm. Many of the most difficult recognition problems have seen gains over the past few years that are astonishing.

Although we have seen large improvements in the accuracy of recognition as a result of Deep Neural Networks (DNNs), deep learning approaches have two well-known challenges: they require large amounts of labelled data for training, and they require a type of compute that is not amenable to current general purpose processor/memory architectures. Some companies have responded with architectures designed to address the particular type of massively parallel compute required for DNNs, including our own use of FPGAs, for example, but to date these approaches have primarily enhanced existing cloud computing fabrics.

But I work on HoloLens, and in HoloLens, we’re in the business of making untethered mixed reality devices. We put the battery on your head, in addition to the compute, the sensors, and the display. Any compute we want to run locally for low-latency, which you need for things like hand-tracking, has to run off the same battery that powers everything else. So what do you do?

You create custom silicon to do it.

First, a bit of background. HoloLens contains a custom multiprocessor called the Holographic Processing Unit, or HPU. It is responsible for processing the information coming from all of the on-board sensors, including Microsoft’s custom time-of-flight depth sensor, head-tracking cameras, the inertial measurement unit (IMU), and the infrared camera. The HPU is part of what makes HoloLens the world’s first–and still only–fully self-contained holographic computer.

Today, Harry Shum, executive vice president of our Artificial Intelligence and Research Group, announced in a keynote speech at CVPR 2017 that the second version of the HPU, currently under development, will incorporate an AI coprocessor to natively and flexibly implement DNNs. The chip supports a wide variety of layer types, fully programmable by us. Harry showed an early spin of the second version of the HPU running live code implementing hand segmentation.

The AI coprocessor is designed to work in the next version of HoloLens, running continuously, off the HoloLens battery. This is just one example of the new capabilities we are developing for HoloLens, and is the kind of thing you can do when you have the willingness and capacity to invest for the long term, as Microsoft has done throughout its history. And this is the kind of thinking you need if you’re going to develop mixed reality devices that are themselves intelligent. Mixed reality and artificial intelligence represent the future of computing, and we’re excited to be advancing this frontier.

Source: https://www.microsoft.com/en-us/research/blog/second-version-hololens-hpu-will-incorporate-ai-coprocessor-implementing-dnns/

AI for security: Microsoft Security Risk Detection makes debut

Full details at https://blogs.microsoft.com/next/2017/07/21/ai-for-security-microsoft-security-risk-detection-makes-debut/

Microsoft is making generally available a cloud service that uses artificial intelligence to track down bugs in software, and it will begin offering a preview version of the tool for Linux users as well.

Microsoft Security Risk Detection, previously known as Project Springfield, is a cloud-based tool that developers can use to look for bugs and other security vulnerabilities in the software they are preparing to release or use. The tool is designed to catch the vulnerabilities before the software goes out the door, saving companies the heartache of having to patch a bug, deal with crashes or respond to an attack after it has been released.