MEAN.js with Cosmos DB on Azure

(a YouTube series by John Papa)

Cosmos DB is of significant interest to me for projects I have been engaged in over the past couple of years, which use MongoDB and MEAN in several ways. Scaling with MongoDB has always been a bit of a pain for us, and Cosmos DB on Azure looks set to relieve a lot of the headaches we have had.

MEAN stands for MongoDB, Express, Angular and Node.

I am not the author of these videos; this is a reference list for a YouTube series by John Papa introducing MEAN with Cosmos DB on Azure. I would normally link directly to the creator's blog post for a series such as this, but it seems to be offline just now, so I thought I would share a full list of the current videos here. Hopefully the original link, https://johnpapa.net/angular-cosmosdb-1/, will work again soon.


MEAN.js with Cosmos DB – Part 1: Introduction

John builds a lot of apps with MongoDB, Express, Angular and Node (MEAN). MongoDB works very well with these, but recently he has been using Cosmos DB on Azure in its place because it is easy to use, easy to scale, and super fast, and he does not have to change how he codes.


MEAN.js with Cosmos DB – Part 2: Creating the Node.js and Express App

Creating a Node.js and Express app along with the Angular CLI, then creating a web API endpoint and trying it out.
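To give a flavour of what that endpoint looks like, here is a minimal sketch; the /api/heroes route and the in-memory data are my own placeholders, not John's exact code:

```typescript
// server.ts - minimal Express API sketch (route name and data are placeholders)
import express from 'express';

const app = express();
app.use(express.json()); // parse JSON request bodies

// A simple GET endpoint you can try in a browser or with curl
app.get('/api/heroes', (_req, res) => {
  res.json([{ id: 1, name: 'Windstorm' }]);
});

const port = Number(process.env.PORT) || 3000;
app.listen(port, () => console.log(`API listening on port ${port}`));
```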


MEAN.js with Cosmos DB – Part 3: Angular and Express APIs

The A in MEAN stands for Angular. This video shows how to build an Angular UI that talks to the Express API, with GET, POST, PUT, and DELETE.
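As a rough illustration, this is the shape of an Angular service wrapping those four verbs; the Hero interface and the /api/heroes URL are assumptions on my part, written with Angular's standard HttpClient rather than John's exact code:

```typescript
// hero.service.ts - sketch of an Angular service calling the Express API
// (the Hero shape and the /api/heroes URL are illustrative assumptions)
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

export interface Hero { id: number; name: string; }

@Injectable({ providedIn: 'root' })
export class HeroService {
  private api = '/api/heroes'; // assumed route on the Express server

  constructor(private http: HttpClient) {}

  getHeroes(): Observable<Hero[]> { return this.http.get<Hero[]>(this.api); }
  addHero(hero: Hero): Observable<Hero> { return this.http.post<Hero>(this.api, hero); }
  updateHero(hero: Hero): Observable<Hero> { return this.http.put<Hero>(`${this.api}/${hero.id}`, hero); }
  deleteHero(id: number): Observable<void> { return this.http.delete<void>(`${this.api}/${id}`); }
}
```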


MEAN.js with Cosmos DB – Part 4: Creating and Deploying Cosmos DB

Using the Azure CLI to create a Cosmos DB account that uses the MongoDB data model, and deploying it to Azure. Then we view what we created in the Azure portal.
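The provisioning steps look roughly like the following; the resource group and account names are placeholders, and the exact flags can vary between az CLI versions:

```bash
# Sketch only: names are placeholders; flags may differ by az version
az group create --name heroes-rg --location westeurope

# --kind MongoDB makes the account speak the MongoDB wire protocol,
# so existing MongoDB/Mongoose code can connect unchanged
az cosmosdb create --name heroes-db-account --resource-group heroes-rg --kind MongoDB
```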


MEAN.js with Cosmos DB – Part 5: Querying Cosmos DB

How to connect to the MongoDB database in Azure Cosmos DB using Mongoose, and query it for data.
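A hedged sketch of that connection and query; the database name and schema are placeholders, and the connection string shape (host on documents.azure.com, port 10255, ssl=true) is what the Azure portal provides for the MongoDB API:

```typescript
// db.ts - sketch of querying Cosmos DB through its MongoDB API with Mongoose
// (database name and schema are placeholders; copy the real connection
// string from the Azure portal - it requires ssl=true)
import mongoose from 'mongoose';

const uri = 'mongodb://<account>:<key>@<account>.documents.azure.com:10255/heroesdb?ssl=true';

const heroSchema = new mongoose.Schema({ id: Number, name: String });
const Hero = mongoose.model('Hero', heroSchema);

async function listHeroes(): Promise<void> {
  await mongoose.connect(uri);
  // a plain MongoDB-style query; nothing Cosmos-specific in the code
  const heroes = await Hero.find().exec();
  console.log(heroes);
  await mongoose.disconnect();
}

listHeroes().catch(console.error);
```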

You can subscribe to John's YouTube series at https://www.youtube.com/playlist?list=PLbnXt_I6OfBWU9JiDNewZm11-7eFQf70M or follow him on Twitter at @John_Papa.

Second version of HoloLens HPU will incorporate AI coprocessor for implementing DNNs


Posted July 23, 2017 | by Microsoft Research Blog

By Marc Pollefeys, Director of Science, HoloLens

It is not an exaggeration to say that deep learning has taken the world of computer vision, and many other recognition tasks, by storm. Many of the most difficult recognition problems have seen gains over the past few years that are astonishing.

Although we have seen large improvements in the accuracy of recognition as a result of Deep Neural Networks (DNNs), deep learning approaches have two well-known challenges: they require large amounts of labelled data for training, and they require a type of compute that is not amenable to current general purpose processor/memory architectures. Some companies have responded with architectures designed to address the particular type of massively parallel compute required for DNNs, including our own use of FPGAs, for example, but to date these approaches have primarily enhanced existing cloud computing fabrics.

But I work on HoloLens, and in HoloLens, we’re in the business of making untethered mixed reality devices. We put the battery on your head, in addition to the compute, the sensors, and the display. Any compute we want to run locally for low-latency, which you need for things like hand-tracking, has to run off the same battery that powers everything else. So what do you do?

You create custom silicon to do it.

First, a bit of background. HoloLens contains a custom multiprocessor called the Holographic Processing Unit, or HPU. It is responsible for processing the information coming from all of the on-board sensors, including Microsoft’s custom time-of-flight depth sensor, head-tracking cameras, the inertial measurement unit (IMU), and the infrared camera. The HPU is part of what makes HoloLens the world’s first–and still only–fully self-contained holographic computer.

Today, Harry Shum, executive vice president of our Artificial Intelligence and Research Group, announced in a keynote speech at CVPR 2017 that the second version of the HPU, currently under development, will incorporate an AI coprocessor to natively and flexibly implement DNNs. The chip supports a wide variety of layer types, fully programmable by us. Harry showed an early spin of the second version of the HPU running live code implementing hand segmentation.

The AI coprocessor is designed to work in the next version of HoloLens, running continuously, off the HoloLens battery. This is just one example of the new capabilities we are developing for HoloLens, and is the kind of thing you can do when you have the willingness and capacity to invest for the long term, as Microsoft has done throughout its history. And this is the kind of thinking you need if you’re going to develop mixed reality devices that are themselves intelligent. Mixed reality and artificial intelligence represent the future of computing, and we’re excited to be advancing this frontier.

Source: https://www.microsoft.com/en-us/research/blog/second-version-hololens-hpu-will-incorporate-ai-coprocessor-implementing-dnns/

AI for security: Microsoft Security Risk Detection makes debut

Full details at https://blogs.microsoft.com/next/2017/07/21/ai-for-security-microsoft-security-risk-detection-makes-debut/

Microsoft is making generally available a cloud service that uses artificial intelligence to track down bugs in software, and it will begin offering a preview version of the tool for Linux users as well.

Microsoft Security Risk Detection, previously known as Project Springfield, is a cloud-based tool that developers can use to look for bugs and other security vulnerabilities in the software they are preparing to release or use. The tool is designed to catch the vulnerabilities before the software goes out the door, saving companies the heartache of having to patch a bug, deal with crashes or respond to an attack after it has been released.

Amazon brings .Net Core support to AWS Cloud


Re-post from http://opensourceforu.com/2017/07/amazon-brings-net-core-support-aws-cloud/

Encouraging developers to build cross-platform applications at scale, Amazon has added .Net Core support to its AWS Cloud services. The services upgraded with the new support include AWS CodeStar and AWS CodeBuild.

“The support for .Net Core in AWS CodeStar and AWS CodeBuild opens the door for .Net developers to take advantage of the benefits of Continuous Integration and Delivery when building .Net based solutions on AWS,” said Tara Walker, technical evangelist, Amazon Web Services (AWS), in a statement.

The AWS team launched the CodeStar service back in April for Amazon EC2, AWS Elastic Beanstalk and AWS Lambda projects using five programming languages: JavaScript, Java, Python, Ruby and PHP. Though the original list of supported languages already covered a lot of ground, Amazon is now looking to attract developers on Microsoft's Azure by enabling .Net Core support.

Deploy code on Amazon EC2 and AWS Lambda

Developers can leverage the latest support to build and deploy their .Net Core application code to both Amazon EC2 and AWS Lambda. This ability comes via the CodeBuild service, which brings two new project templates to AWS CodeStar for .Net Core applications. There is also sample code and a full software development toolchain to ease development.

Importantly, Visual Studio 2017 is required, alongside the AWS Toolkit for Visual Studio 2017, to start building .Net Core applications for Amazon's cloud solution. You can also deploy your existing .Net Core code to enable your applications on AWS.

Posted on July 13, 2017

 

Getting Cortana if it is not visible or blocked for your region on Windows Phone 8.1

You can still get Cortana, even if your carrier has blocked it or not rolled it out, provided you can do the manual updates.

Worth noting that some carriers will intentionally block the update for a while on phones already out there, based on your regional settings, so they can sell the new ones with it enabled as a selling point. Hence the 630 got it before you could update the 920. But as I mention, you can still get it, provided you can upgrade your phone to 8.1 and relatively recent updates; the newer the updates, the newer the version you will get and the more languages and accents will be supported.

Also worth knowing: you might already have it if you are on the latest update; it may just not be switched on, as the default is “off”. On most older phones, including the 920, it is still classed as a “Beta” application, which replaces the existing speech support built into those devices since 8.0.

Finding it if you already have it:

Open “Settings”.

  • You will be on the default “system” tab; swipe to the side so you are on “applications”. If Cortana is available with your current configuration, it will be listed.
  • Click “Cortana”.
  • Turn it on.
  • Restart the phone if required.
  • Hold down the search button to start it.

If blocked or not visible – getting it if you do not have it:

This varies somewhat depending on your device and update availability. The easiest thing to do is change everything to “English (United States)”, turn on Cortana, and then try adjusting things back to your region one by one. When Cortana stops working, change whatever you last changed back to “English (United States)”.

The three areas to change are “Region”, “Language” and “Speech”. Then you need to turn it on.

Cortana will only work if “Language” and “Speech” match and Cortana is available for that combination.

[Screenshots: system settings]

Here I have it working with both English (UK) and English (US); note that “Language” matches “Speech”.

Open “Settings”.

  1. Under the default “system” tab, scroll down and go into “Region”.
  2. Change “country/region” to “United States” (if Cortana not available to you).
  3. Change “Regional format” to <your desired format> (for me “English (United Kingdom)”).  
    • This preserves your currency and date format.
  4. Click on “Restart Phone”.
  5. Check for any updates and install them; restart as required.
    • This will get Cortana and other updates if not already installed.
  6. Under the default “system” tab, scroll down and go into “Language”.
  7. Change the language to match the region setting (“English (United States)”).
  8. Click on “Restart Phone”.
  9. The next bit differs depending on whether Cortana is available in your local language version, including the variations in English accent and pronunciation.
    • If it is available in your language/accent:
      1. Go into “Settings”.
      2. If on the “system” tab, swipe to the side so you are on “applications”.
      3. If available, you will see Cortana at or near the top.
    • If it is not visible within the “applications” tab (within “Settings”):
      1. Go back to the “system” tab.
      2. Scroll down to “speech”.
      3. Go into “speech” and change the “Speech language” to “English (United States)”, or another one that is currently supported by Cortana.

        Note: Depending on your current phone setup, you may need to download the language, then go back into this setting and reselect it to get it to install. Follow the prompts, reboot as required, and then check for any phone updates, which will get the updated speech package that includes Cortana.

      4. Make sure “speech” matches “language”.
      5. Once done with all the updates and language pack installs, as above, go into “Settings”, swipe to the side to get the “applications” tab, and you will see Cortana listed. Open it and switch it on.
      6. Restart the phone if required, hold down the search button, and say hello to Cortana.

Self Service BI within Manufacturing #SQLSaturdayEdinburgh #SQLPASS Presentation (#SQLSat)


During mid-April I was approached by Microsoft (UK) and asked if I would do a presentation at the Microsoft “Accelerate Your Insights” one-day conference on the 1st of May 2014. Though hesitant and somewhat nervous about the prospect, as I had never spoken in public, I agreed and prepared a presentation.

The presentation was related to the recent case study I had the pleasure of being involved in through my employer, Jabil (@JabilCircuitInc). It would focus on how, at Jabil, we have progressed through the various backend SQL Server infrastructures offered by Microsoft over recent years, and how we are using new technologies and features to enable BI delivery to our employees via production systems.

As a direct result of the presentation at Microsoft’s UK headquarters (Reading, UK) I was also invited to speak at a SQL Saturday (SQL PASS community event) being held in Edinburgh on 14th June 2014 at Edinburgh University Conference Centre.

I hit a bit of a technical snag with my work laptop, with less than 2 minutes to go before my presentation.


I quickly switched to my personal Surface Pro, which by pure chance I had decided to grab as I was leaving in the morning; I had only taken it so I had something light to play with between sessions. Just as well I did: a quick switch and a download of the presentation from cloud storage, and I was good to go, minus my demos.

Overall I was able to pad out the presentation, talking about several other aspects and areas where we are working with SQL 2014 and BI. I had several questions, so I take from that that audience engagement was good. Hopefully all who attended my presentation took away something they did not know, or at least found it useful.

The presentation can be downloaded both from the SQL Saturday website and from here:

[Presentation download link]

Thanks to Jen Stirrup (@JenStirrup) for the invitation to speak and arranging the great free training event; hope to be invited back in the future.

#SQLPASS #SQLSat #SQLSatEdinburgh #SQLSat281

Full case study @ http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=710000004223

Global Firm Takes an Evolutionary Leap in Data Management with Self-Service BI (Case Study)

A case study I was involved in just got published on Microsoft.com.

Over the past few years a lot of the work I have been involved in has been subject to NDAs, including this work with Microsoft (via my employer). Hence I have been unable to blog about my work or any of the great features of SQL Server 2014 or the Power BI suite of products.

Over the past year, as part of the case study, we were given advance access to SQL Server 2014 builds, Power BI and enhanced features of SharePoint. We also had assistance from, and regular contact with, the SQL development team and the Power BI guys.

As a direct result of my participation I was lucky enough to enjoy a few trips to the USA, including Seattle, Charlotte (for the SQL PASS 2013 conference) and Tampa, making 2013 a very enjoyable and educational year for me 🙂

Business intelligence (BI) information is only valuable when the right users can discover, analyze, use and share it with others—and all in a timely manner. Current technologies produce data at overwhelming rates, often faster than business users can analyze it, and the bottleneck is frequently the time that it takes to generate useful and impactful reports. At US-based supply chain management giant Jabil, as in many enterprises, data analysis has long been a time-consuming and intensive collaboration between the business groups and IT, creating customized reports whose information, by the time it’s used, is already growing stale. With its new solution built on Microsoft SQL Server 2014 and SQL Server 2014 Power View, Jabil users can create their own reports in minutes from business critical data sources using Microsoft Excel, with IT providing training and guidance—freeing up time to work on strategic projects.

Full case study @ http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=710000004223

Download PDF of Case Study