Our solution included in Microsoft Ignite 2017 Keynote

Azure Cosmos DB, Azure DW, Machine Learning, Deep Learning, Neural Networks, TensorFlow, SQL Server, ASP.NET Core… are just a few of the components that make up one of the solutions we are currently developing.

I have been under a social media embargo until today, but now that the Microsoft Ignite 2017 keynote has taken place, I can proudly say that the solution our team has been working on for some time was part of the keynote addresses.

During the second keynote, led by Scott Guthrie, Danielle Dean, a Data Scientist Lead, discussed at a high level one of the solutions we are developing at Jabil, which involves advanced image recognition of circuit board issues. The keynote focused on the solution’s data science portion and introduced the new Azure Machine Learning Workbench to the packed audience.

Tomorrow morning there is a session, “Using big data, the cloud, and AI to enable intelligence at scale” (Tuesday, September 26, 9:00 AM to 10:15 AM, in Hyatt Regency Windermere X), during which we will go into a bit more detail and the team at Microsoft will expand on the new AI and Big Data machine learning capabilities.

Visual Studio 2017 version 15.3 Release Notes

Release Date: August 18, 2017 – Visual Studio 2017 version 15.3.1

Issues Fixed in August 18, 2017 Release

The customer-reported issues addressed in this version are listed in the full release notes, linked below.


Summary: What’s New in this Release

  • Accessibility Improvements make Visual Studio more accessible than ever.
  • Azure Function Tools are included in the Azure development workload. You can develop Azure Function applications locally and publish directly to Azure.
  • You can now build applications in Visual Studio 2017 that run on Azure Stack and government clouds, like Azure in China.
  • We improved .NET Core development support for .NET Core 2.0, and Windows Nano Server containers.
  • In Visual Studio IDE, we improved Sign In and Identity, the start page, Lightweight Solution Load, and setup CLI. We also improved refactoring, code generation and Quick Actions.
  • The Visual Studio Editor has better accessibility due to the new ‘Blue (Extra Contrast)’ theme and improved screen reader support.
  • We improved the Debugger and diagnostics experience. This includes Point and Click to Set Next Statement. We’ve also refreshed all nested values in the variable window, and made Open Folder debugging improvements.
  • Xamarin has a new standalone editor for editing app entitlements.
  • The Open Folder and CMake Tooling experience is updated. You can now use CMake 3.8.
  • We made improvements to the IntelliSense engine, and to the project and the code wizards for C++ Language Services.
  • Visual C++ Toolset supports command-prompt initialization targeting.
  • We added the ability to use C# 7.1 Language features.
  • You can install TypeScript versions independent of Visual Studio updates.
  • We added support for Node 8 debugging.
  • NuGet has added support for new TFMs (netcoreapp2.0, netstandard2.0, Tizen), Semantic Versioning 2.0.0, and MSBuild integration of NuGet warnings and errors.
  • Visual Studio now offers .NET Framework 4.7 development tools to supported platforms with 4.7 runtime included.
  • We added clusters of related events to the search query results in the Application Insights Search tool.
  • We improved syntax support for SQL Server 2016 in Redgate SQL Search.
  • We enabled support for Microsoft Graph APIs in Connected Services.

Read more at https://www.visualstudio.com/en-gb/news/releasenotes/vs2017-relnotes#15.3.26730.08


.NET Core 2.0 and ASP.NET Core 2.0 Released

I have been busy these past couple of weeks, but if, like me, you are catching up: on 14th August, Microsoft released .NET Core 2.0, including ASP.NET Core 2.0.

.NET Core 2.0

.NET and C# – Get Started in 10 Minutes

ASP.NET Core 2.0

This release features compatibility with .NET Core 2.0, tooling support in Visual Studio 2017 version 15.3, and the new Razor Pages user-interface design paradigm. For a full list of updates, you can read the release notes, and the ASP.NET Announcements GitHub repository tracks the changes from previous versions of ASP.NET Core. The latest SDK and tools can be downloaded from https://dot.net/core.

Read more at https://blogs.msdn.microsoft.com/webdev/2017/08/14/announcing-asp-net-core-2-0/



.NET Conf, a free virtual event for developers

Are you ready to learn all about .NET? .NET Conf, running from 19th to 21st September 2017, is a free virtual conference. #dotnetconf – http://www.dotnetconf.net/


More Details…

The .NET Conf is a free, three-day virtual developer event co-organized by the .NET community and Microsoft. Some of the speakers lined up so far:

  • Scott Hunter – Director of Program Management, .NET
  • Kasey Uhlenhuth – Program Manager, .NET
  • Mads Torgersen – C# Language Designer
  • Mikayla Hutchinson – Principal Program Manager, Xamarin
  • Scott Hanselman – Principal Program Manager, .NET

What’s in store for you?

“Over the course of the three days you have a wide selection of live sessions that feature speakers from the community and .NET product teams. These are the experts in their field and it is a chance to learn, ask questions live, and get inspired for your next software project.

You will learn to build for web, mobile, desktop, games, services, libraries and more for a variety of platforms and devices all with .NET. We have sessions for everyone, no matter if you are just beginning or are a seasoned engineer. We’ll have presentations on .NET Core and ASP.NET Core, C#, F#, Roslyn, Visual Studio, Xamarin, and much more.”

Check out http://www.dotnetconf.net/ for more details…

Creating custom Power BI visuals? (… and adding interactivity to a Power BI dashboard)

If you are wondering how to create custom visuals for Power BI, then, handily, there is an increasing number of open-source samples and visuals becoming available.
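
Before digging into the samples, it helps to see how small the surface area is. Here is a minimal sketch, in TypeScript, of the shape a custom visual takes, roughly what the powerbi-visuals-tools scaffolding (`pbiviz new`, API v1.x era) generates; the class name and the rendering logic are my own illustrative assumptions, not code from any shipped visual:

```typescript
module powerbi.extensibility.visual {
    // Illustrative visual: renders the current viewport size as text.
    export class SimpleVisual implements IVisual {
        private target: HTMLElement;

        constructor(options: VisualConstructorOptions) {
            // Power BI hands the visual a host element to render into.
            this.target = options.element;
        }

        public update(options: VisualUpdateOptions) {
            // Called whenever the bound data or the viewport changes.
            const { width, height } = options.viewport;
            this.target.innerText = `Viewport: ${width} x ${height}`;
        }
    }
}
```

Real visuals like the Drilldown Player build on exactly this constructor/update pair, pulling their data out of `options.dataViews` inside `update`.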

One such visualisation is the Drilldown Player, released by Microsoft as open source and built in conjunction with their partner Gramener (http://gramener.com).

You can get the code from GitHub @ https://github.com/Microsoft/powerbi-visuals-drilldown-player.

You can get the compiled visual @ https://store.office.com/en-us/app.aspx?assetid=WA104381035&sourcecorrid=bde0be33-be77-400c-a17c-19849a52e1f5&ui=en-US&rs=en-US&ad=US&appredirect=false

Chris Webb recently shared a blog post about using this visual to add interactivity… Creating Animated Reports In Power BI With The Drilldown Player Custom Visual

Chris Webb's BI Blog

Last week I had the chance to do something I have not done before: build a Power BI report to be displayed on a big screen hanging on a wall. To make up for the loss of user interactivity, I used the new Drilldown Player custom visual to cycle through different selections and display a new slice of data every few seconds; Devin Knight’s blog post here has a great summary of how to use it. However I wasn’t happy about the look of the Drilldown Player visual in this particular report: the play/stop/pause buttons aren’t much use if you can’t click on them and the visual doesn’t show all of the values that it is cycling through. As a result I hid the visual behind another one and came up with a different way of displaying the currently-displayed selection.

Here’s a simple example of what I did. Imagine you…


Power BI custom visual from Visio

Visualize business process workflows, real-world layouts like factory floor plans, network diagrams, organization structures, or any illustration created in Microsoft Visio, and easily connect it to Power BI data. Contextually represent Power BI data as colours or text on Visio diagrams. Now drive operational intelligence effectively using the Visio custom visual.

MEAN.js with Cosmos DB on Azure

(a YouTube series by John Papa)

Cosmos DB is of significant interest to me for projects I have been engaged in over the past couple of years, which use MongoDB and MEAN in several ways. Scaling MongoDB has always been a bit of a pain for us, and Cosmos DB on Azure looks set to relieve a lot of the headaches we have had.

MEAN stands for MongoDB, Express, Angular and Node.

I am not the author of these; this is a reference list for a YouTube series by John Papa introducing MEAN with Cosmos DB on Azure. I would normally link directly to the creator’s blog post for a series such as this, but it seems to be offline just now, so I thought I would share a full list of the current videos here. Hopefully the original link, https://johnpapa.net/angular-cosmosdb-1/, will work again soon.


MEAN.js with Cosmos DB – Part 1: Introduction

John builds a lot of apps with MongoDB, Express, Angular and Node (MEAN). MongoDB just works so well with these, but recently he has been using Cosmos DB on Azure in its place because it is easy to use and scale, is super fast, and he does not have to change how he codes.


MEAN.js with Cosmos DB – Part 2: Creating the Node.js and Express App

Creating a Node.js and Express app alongside the Angular CLI, then creating a web API endpoint and trying it out.
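
As a rough sketch of the pattern this part covers (this is not John’s actual code; the route, sample data, and folder names are my assumptions), an Express server in TypeScript can serve the Angular build output alongside a simple JSON endpoint:

```typescript
import * as express from 'express';

const app = express();

// Serve the compiled Angular app (assumes `ng build` output lands in ./dist).
app.use(express.static('dist'));

// A simple web API endpoint to try out.
app.get('/api/heroes', (req, res) => {
    res.json([{ id: 1, name: 'Aurelia' }]);
});

const port = +(process.env.PORT || 3000);
app.listen(port, () => console.log(`API listening on port ${port}`));
```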


MEAN.js with Cosmos DB – Part 3: Angular and Express APIs

The A in MEAN stands for Angular. This video shows how to build an Angular UI that talks to the Express API, with GET, POST, PUT, and DELETE.
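
A hedged sketch of what such a service can look like in TypeScript; the `Hero` shape and the `/api/heroes` routes are illustrative assumptions, and I am using Angular’s `HttpClient` here, which may differ from what the video uses:

```typescript
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs/Observable';

interface Hero { id: number; name: string; }

@Injectable()
export class HeroService {
    private api = '/api/heroes';

    constructor(private http: HttpClient) {}

    // GET: read the full list.
    getHeroes(): Observable<Hero[]> {
        return this.http.get<Hero[]>(this.api);
    }

    // POST: create a new hero.
    addHero(hero: Hero): Observable<Hero> {
        return this.http.post<Hero>(this.api, hero);
    }

    // PUT: update an existing hero.
    updateHero(hero: Hero): Observable<Hero> {
        return this.http.put<Hero>(`${this.api}/${hero.id}`, hero);
    }

    // DELETE: remove a hero by id.
    deleteHero(id: number): Observable<void> {
        return this.http.delete<void>(`${this.api}/${id}`);
    }
}
```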


MEAN.js with Cosmos DB – Part 4: Creating and Deploying Cosmos DB

Using the Azure CLI to create the Cosmos DB account with a MongoDB-model database and deploy it to Azure, then viewing what we created in the Azure portal.


MEAN.js with Cosmos DB – Part 5: Querying Cosmos DB

How to connect to the MongoDB-model database in Azure Cosmos DB using Mongoose, and query it for data.
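
This is the step where Cosmos DB’s MongoDB compatibility pays off: in principle only the connection string changes. A minimal sketch in TypeScript, assuming placeholder account, key, and database names (the real connection string comes from the Azure portal):

```typescript
import * as mongoose from 'mongoose';

// Placeholder Cosmos DB (MongoDB API) connection string; substitute the
// account name and key from the Azure portal. Port 10255 with SSL is the
// MongoDB-compatible endpoint Cosmos DB exposes.
const cosmosUri =
    'mongodb://<account>:<key>@<account>.documents.azure.com:10255/heroesdb?ssl=true';

mongoose.connect(cosmosUri);

// An illustrative model matching the API sketched earlier.
const Hero = mongoose.model('Hero', new mongoose.Schema({
    id: Number,
    name: String,
}));

// Query exactly as you would against any MongoDB server.
Hero.find().exec()
    .then(heroes => console.log(heroes))
    .catch(err => console.error(err));
```

Because Cosmos DB speaks the MongoDB wire protocol, the Mongoose models and queries carry over unchanged, which is the “I do not have to change how I code” point from Part 1.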

You can subscribe to John’s YouTube series at https://www.youtube.com/playlist?list=PLbnXt_I6OfBWU9JiDNewZm11-7eFQf70M or follow him on Twitter at @John_Papa.

Second version of HoloLens HPU will incorporate AI coprocessor for implementing DNNs


Posted July 23, 2017 | by Microsoft Research Blog

By Marc Pollefeys, Director of Science, HoloLens

It is not an exaggeration to say that deep learning has taken the world of computer vision, and many other recognition tasks, by storm. Many of the most difficult recognition problems have seen gains over the past few years that are astonishing.

Although we have seen large improvements in the accuracy of recognition as a result of Deep Neural Networks (DNNs), deep learning approaches have two well-known challenges: they require large amounts of labelled data for training, and they require a type of compute that is not amenable to current general purpose processor/memory architectures. Some companies have responded with architectures designed to address the particular type of massively parallel compute required for DNNs, including our own use of FPGAs, for example, but to date these approaches have primarily enhanced existing cloud computing fabrics.

But I work on HoloLens, and in HoloLens, we’re in the business of making untethered mixed reality devices. We put the battery on your head, in addition to the compute, the sensors, and the display. Any compute we want to run locally for low-latency, which you need for things like hand-tracking, has to run off the same battery that powers everything else. So what do you do?

You create custom silicon to do it.

First, a bit of background. HoloLens contains a custom multiprocessor called the Holographic Processing Unit, or HPU. It is responsible for processing the information coming from all of the on-board sensors, including Microsoft’s custom time-of-flight depth sensor, head-tracking cameras, the inertial measurement unit (IMU), and the infrared camera. The HPU is part of what makes HoloLens the world’s first–and still only–fully self-contained holographic computer.

Today, Harry Shum, executive vice president of our Artificial Intelligence and Research Group, announced in a keynote speech at CVPR 2017 that the second version of the HPU, currently under development, will incorporate an AI coprocessor to natively and flexibly implement DNNs. The chip supports a wide variety of layer types, fully programmable by us. Harry showed an early spin of the second version of the HPU running live code implementing hand segmentation.

The AI coprocessor is designed to work in the next version of HoloLens, running continuously, off the HoloLens battery. This is just one example of the new capabilities we are developing for HoloLens, and is the kind of thing you can do when you have the willingness and capacity to invest for the long term, as Microsoft has done throughout its history. And this is the kind of thinking you need if you’re going to develop mixed reality devices that are themselves intelligent. Mixed reality and artificial intelligence represent the future of computing, and we’re excited to be advancing this frontier.

Source: https://www.microsoft.com/en-us/research/blog/second-version-hololens-hpu-will-incorporate-ai-coprocessor-implementing-dnns/

AI for security: Microsoft Security Risk Detection makes debut

Full details at https://blogs.microsoft.com/next/2017/07/21/ai-for-security-microsoft-security-risk-detection-makes-debut/

Microsoft is making generally available a cloud service that uses artificial intelligence to track down bugs in software, and it will begin offering a preview version of the tool for Linux users as well.

Microsoft Security Risk Detection, previously known as Project Springfield, is a cloud-based tool that developers can use to look for bugs and other security vulnerabilities in the software they are preparing to release or use. The tool is designed to catch the vulnerabilities before the software goes out the door, saving companies the heartache of having to patch a bug, deal with crashes or respond to an attack after it has been released.