Updates to the myday student portal from Microsoft Education partner Collabco

Offering a wide choice of ‘click-through’ links to everything students need to organise and do their work on a daily basis, the myday student portal from Microsoft Education partner Collabco is a popular choice in Higher Education. When our guest writer Gerald Haigh spoke with Bishop Grosseteste University's Head of IT Richard Corn earlier in the year, it was clear that myday had had a positive impact not only on students, but also on the IT staff. We're now pleased to share details of the latest updates and improvements to myday, directly from the team at Collabco:

Many UK institutions are looking to student portals as a way of improving student participation. Easily customisable for both faculty and students, myday is a turnkey solution which can be deployed and functional across an institution in less than a month. This ‘quick start’ capability reflects Collabco's clear understanding of what institutions are looking for in the first instance. Hooking straight into the student experience, myday provides a good blend of organisation and information, including these essential features:

- Integration with VLEs such as Moodle, BlackBoard and Canvas, with single sign-on from myday
- Office 365 integration, including calendar and email, with easy single sign-on access
- User personalisation – users can choose their own layout, theme and colours
- Social media integration (Twitter, Facebook) – no need to follow or like
- Extensive targeting of content across the myday platform
- Multi-language support – great for Welsh institutions as well as international students
- Notification platform – news, timetable changes and upcoming events all pushed to your mobile
- Extensible platform – develop your own myday apps!

In addition to the impressive feature set, what makes myday an incredibly attractive option for HE institutions is the speed with which it can be deployed, as highlighted by Bishop Grosseteste University’s Head of IT, Richard Corn, who was impressed by the speed and efficiency of the installation, commenting that “The main implementation took only twenty-eight days.”

More than just a simple portal, myday offers all of this functionality from a web browser or via a native app on your mobile phone, delivering a consistent, targeted and personal experience on any device. As part of the Microsoft Partner network, myday is a trusted solution with a longer proven track record than many other solutions. If you'd like to find out more, you can get in touch directly with the team at Collabco, who will happily provide you with access to their demo portal so you can see for yourself. Please contact Rachel (rachel.pygott@collabco.co.uk) or James (james.dean@collabco.co.uk), or alternatively visit http://mydaycloud.com for more information.

Posted by on 30 July 2015 | 2:45 am

Microsoft Imagine Cup 2015 World Finals - Day 3

Today was the big day, with presentations from all 33 teams in the Imagine Cup 2015 finals. Since Team enCourage's presentation wasn't until the afternoon, there was still a little time for a final polish on demos and slides. Final presentations: At 14:30 it was time. In the room for the "World Citizenship" category, the team had 10 minutes to convince the jury of their project. The panel, featuring high-calibre representatives of Microsoft, the Red Cross and the Bill & Melinda Gates Foundation, among others...(read more)

Posted by on 30 July 2015 | 1:21 am

Imagine Cup 2015 Day 3

It is no exaggeration to say that everything in this year's Imagine Cup World Finals, which run from Monday to Friday, comes down to the category judging on this third day. From 8 a.m. to 6 p.m., all 33 teams, 113 students who fought their way through their national and regional qualifiers, take on the final battle. Judging for the 12 teams in the Innovation category, in which Japan's representative is nominated, took place in the Odyssey room of Building 92 on the Microsoft Campus, home to the Visitor Center and Company Store. Building 92, the venue for the final category judging, sits at the heart of Microsoft's US headquarters. Japan's representative, ScreenAIR, carried their equipment from the showcase hall to the judging room, already quite nervous by that point. Japan's slot was at 13:15. To keep things fair, each team's setup time is fixed at 10 minutes regardless of how large or complex its equipment is, so they hurried through setup with one eye on the countdown. Setup was finished with three minutes to spare. Everyone took two deep breaths together to ease the tension, and then it was showtime. Colleagues from Microsoft Japan attending TechReady also came to cheer them on. Go, ScreenAIR! From here, all we employees could do was watch. The presentation finally began. Even though they couldn't hide their nerves, their passionate delivery conveyed how much ScreenAIR means to them and caught the judges' interest. Because the presentation time was fixed at exactly 10 minutes, they were cut off with about 30 seconds of explanation still to go, but it was a wonderful presentation that showed the full results of their practice. Without a moment to catch their breath, they filmed a promotional video. When I whispered to the director that the Japanese character "風" on the backs of their uniforms means "Wind" in English, he understood the intent and had them turn their backs to the camera. ScreenAIR was also interviewed by Taiwanese media, explaining the wind-sensing device with a miniature model. After that, from 4 p.m. to 6 p.m., Microsoft employees and mentors alike were shut out of the room while the students alone handled the judges' Q&A. From what we heard afterwards, they managed to answer every judge's question, which was a relief. Eben Upton, one of the Innovation category judges and the founder and current CEO of Raspberry Pi, seemed to take a real liking to the project, tweeting: "Screen Feels Air presentation from @ZackeyKei & co at @Microsoft@imaginecup. Akihabara's next controller craze? https://www.youtube.com/watch?v=nZPl6GsR7U4". That raises our hopes for the judging. With everything over, we took a commemorative photo in front of the Microsoft logo at the entrance of Building 92. The results will be announced tomorrow morning at the Awards Ceremony. Winning the category prize earns a place in the World Championship, held on the final day at the Washington State Convention & Trade Center. We have done all we can; the rest is in fate's hands. That is how everyone feels as we head off to sleep. Good night.

Posted by on 30 July 2015 | 1:04 am

ALIX goes to Seattle [Day Three]: Presentation, Demo and Q&A, Souvenirs, Showcase and Rest

ALIX's third day in Seattle was judging day, when ALIX gave its presentation, demo and Q&A with the judges. Judging: Presentation. Fueled by a breakfast of cold bread, eggs and yoghurt, team ALIX had to set off at 7:15 a.m. for the judging venue. Drawing slot number 3 for World Citizenship, ALIX presented at 8:50 a.m., for 10 minutes. The presentation was delivered before five judges: the CIO of Path, a Director of the Gates Foundation, the CIO of the International Red Cross, and two Directors from Microsoft. ALIX's presentation went smoothly, without a hitch, exactly as rehearsed. Right after the presentation, ALIX headed to the showcase area to prepare the booth. Judging: Demo and Q&A. After the presentations finished, from 10:40 to 12:30 the judges visited every team's booth for a demo and Q&A session, 10 minutes per judge. The judges' questions were razor-sharp, probing the strengths and weaknesses of the Solidare app. It was an extraordinary experience that will certainly help Solidare's development, regardless of the win-or-lose outcome. Souvenir shopping. With all the judging complete, ALIX took a break for lunch. With the big burden lifted, ALIX took the chance to visit the Microsoft Visitor Center to see exhibits of Microsoft's latest technologies, and of course did not forget to stop by the Microsoft Company Store to buy souvenirs, both for themselves and as gifts. Showcase. From 6 p.m., ALIX was back at the booth to receive visits from Microsoft employees and from fellow Imagine Cup 2015 participants. There were many interesting discussions, including with several Indonesians who work at Microsoft in Redmond! Rest. After the showcase ended at 8 p.m., all Imagine Cup participants gathered for a group photo. Afterwards, everyone returned to the dorms by bus. Team ALIX went straight to rest and recharge after several days of too little sleep. Tomorrow morning at 8:30 (22:30 WIB), the winners of Imagine Cup 2015 will be announced for each competition. Pray that ALIX gets the best result! Stay tuned!

Posted by on 30 July 2015 | 12:51 am

Windows 10 upgrade was uneventful, result is awesome

It was with some hesitation that I decided to click the upgrade button and move on from the Windows 8.1 environment I had grown accustomed to. The Windows 10 upgrade told me to “sit back and relax” … the last time I upgraded my OS I ended up re-installing and re-configuring the tools, widgets and applications I had gathered over weeks and months. How could I possibly relax? I decided to get a coffee and go for a walk instead. When I came back I was welcomed by a ready laptop; I logged on and have not yet found anything missing or non-functional … the upgrade experience was amazing. Well done! I immediately spotted three things I really love, other than my better half:

- The Start Menu is back, packed with features and an integrated start screen … love it!
- Task View offers not only an overview of active tasks, but also the ability to create and switch between desktops. Now I can have a desktop for the Ranger scrum and planning events, a desktop for development and … this rocks!
- The weather forecast for Delta shows 30+ degree Celsius days ahead of us … how can we ask for more?

OK, back to Visual Studio ALM …

Posted by on 30 July 2015 | 12:00 am

Visual Studio 2015 Finally Unveiled

Hello to those who know Visual Studio, those who used it long ago, and those who have never used it at all – the greeting I have opened with throughout this short, newsletter-linked series, which now reaches its final installment. So, once again, hello: I'm Takahashi, a platform evangelist at Microsoft Japan. It's a shame to reach the end, but through this series we've heard from readers saying it got them a little interested in Visual Studio, or that they went ahead and installed Visual Studio 2015. That makes me happy. We also received comments like "I used Visual Studio years ago – it has evolved enormously" and "I never imagined you could even build Android apps with it", which sound familiar but are gratifying all the same. Ya! Ya! Ya! Visual Studio 2015 is finally here. At last, on July 20, Visual Studio 2015 was officially released. You can download the free and evaluation editions and start using them right away. For individual use, the free edition of Visual Studio...(read more)

Posted by on 30 July 2015 | 12:00 am

How to make Windows 10 rock - Silly Competition Time

Yesterday our office celebrated the arrival of Windows 10 with jars of Windows 10 sugary confections. My colleagues here in Australia call it a lolly jar, and I’d call it seaside rock - it’s the kind of sweet that’s made with huge volumes of sugar, then coloured and rolled, ending up with the words “I ♥ Windows 10” running through every sweet (and now you know ‘How to make Windows 10 rock’).

Windows 10 Sweet Competition

In the past, when we’ve launched new products I’ve often tried to get my hands on early copies and given them away in silly competitions on my blog or Twitter. But this time almost everybody is going to get a free Windows 10 upgrade anyway, so instead I collected 10 jars of sweets, threw in my brand new Targus Bex laptop sleeve, and so we can still have a silly competition! I’ll give the prize for the closest guess at how many individual sweets there are in the 10 jars of Windows 10 sweets, by 4PM Friday in Sydney. To enter, just tweet me with your guess - I’m @rayfleming on Twitter. Tweet your answer here. I’ll post it out to the winner in Australia first thing Monday (sorry folks, but I can’t post sweets abroad).

Posted by on 29 July 2015 | 10:24 pm

Five Points on How Applications Change from Windows 7 to Windows 10

Windows 10 gives app developers a unified developer platform that spans a wide range of devices: IoT devices, smartphones, tablets, PCs, Xbox and more. The Windows 10 apps you develop can be distributed to that whole range of devices through the Windows Store. Windows 10 app development also provides a variety of tools and features that let developers put the skills they have already invested in to good use, rather than wasting them. Here are five points that Windows 7 app developers should understand before starting Windows 10 app development. Point 1: "App distribution: safe and easy via the Windows Store!" How are apps distributed on Windows 10? On Windows 7, you distributed the apps you developed to end users on media such as CDs and DVDs, or as downloads over the network. Windows 10 supports not only the same distribution methods as Windows 7 but also, just like the online app stores used on various smartphones...(read more)

Posted by on 29 July 2015 | 8:00 pm

Configuring Skype for Business Websites IIS Log Directory on a range of Front End Servers

Hi All

I often help customers with builds of a number of Skype servers, so I often find myself configuring things for a bunch of servers with PowerShell. One of my recent activities was to automate the configuration of the IIS log directory for the two Skype IIS websites. It's pretty easy to do on the Front End itself, but I wasn't too sure how to do it remotely against a bunch of Front Ends. One of the nice things about my lab environment is that all of my Skype Front Ends have "-SF" in their name, and this is important as I am using an Active Directory query to find all servers matching that string. You of course need to be careful here, as you might have other servers matching the string your Skype servers are named with. If that is the case, that's fine: just use the Get-Content cmdlet and put all your Front Ends in a .txt file you can import, instead of running the Get-ADComputer cmdlet. Either way, the result is the same: configuring IIS log folders for a bunch of servers.

Why move? The main reason I tend to move the IIS log folder off the C: drive is that these days I often see Skype servers deployed as virtual machines with a skinny C: drive and an additional 50-100GB secondary drive for data. As Skype is a pretty IIS-heavy application – meaning the Skype/Lync client will generate a lot of web requests in IIS – you can end up with quite a lot of IIS log files (GBs). So in order to protect the C: drive from being impacted, I often recommend reconfiguring the two Skype IIS websites to output their log files to the secondary data drive.

The good news is that with the Invoke-Command cmdlet it is reasonably easy to execute a bunch of commands against a remote machine. It's a lot like using Sysinternals PsExec to run commands against a remote machine; the main difference with Invoke-Command is that I am running PowerShell commands. Below are the commands I used to reconfigure the IIS log files to the D: drive for all six of my *-SF* Skype for Business 2015 Front End Servers (make sure you have a D: drive before you run this :) ).

# Skype Front End IIS
$strScriptBlock = {
    If (!(Test-Path "d:\Logs\Skype\IIS\External")) { Md d:\Logs\Skype\IIS\External }
    If (!(Test-Path "d:\Logs\Skype\IIS\Internal")) { Md d:\Logs\Skype\IIS\Internal }
    Import-Module WebAdministration
    Set-ItemProperty 'IIS:\Sites\Skype for Business Server External Web Site' -name logFile.directory -value 'd:\Logs\Skype\IIS\External'
    Set-ItemProperty 'IIS:\Sites\Skype for Business Server Internal Web Site' -name logFile.directory -value 'd:\Logs\Skype\IIS\Internal'
}

# Find all the *-SF* servers in Active Directory, then loop through each of them to configure IIS logs.
# Use Get-Content serversfile.txt if your Skype servers aren't named with a unique string you can search for!
cls
Write-Host "Skype IIS Log Files"
Get-ADComputer -Filter {name -like "*-SF*"} | Sort-Object dnshostname | %{
    $strServer = $_.dnshostname
    Write-Host "Configuring IIS Log Files for $strServer!"
    # Check to see if the server is up by hitting the C$ share...
    If (Test-Path \\$strServer\c$) {
        Invoke-Command -ComputerName $strServer -ScriptBlock $strScriptBlock
    }
    Else {
        Write-Host "$strServer is not online!"
    }
}

Now, as you are interested in Skype deployments, no doubt you are interested in Office Web Apps 2013 (OWA) as well. OWA is an IIS application too, so you can also move the IIS log files for those servers. The same approach applies.
(In this example all my Office Web Apps servers were named with the unique string -OW.)

# Office Web App IIS
$strScriptBlock = {
    If (!(Test-Path "d:\Logs\IIS\OWA")) { Md d:\Logs\IIS\OWA }
    Import-Module WebAdministration
    Set-ItemProperty 'IIS:\Sites\HTTP809' -name logFile.directory -value 'd:\Logs\IIS\OWA\HTTP809'
    Set-ItemProperty 'IIS:\Sites\HTTP80' -name logFile.directory -value 'd:\Logs\IIS\OWA\HTTP80'
}

cls
Write-Host "OWA IIS Log Files"
Get-ADComputer -Filter {name -like "*-OW*"} | Sort-Object dnshostname | %{
    $strServer = $_.dnshostname
    Write-Host "Configuring IIS Log Files for $strServer!"
    # Check to see if the server is up by hitting the C$ share...
    If (Test-Path \\$strServer\c$) {
        Invoke-Command -ComputerName $strServer -ScriptBlock $strScriptBlock
    }
    Else {
        Write-Host "$strServer is not online!"
    }
}

As usual, please feel free to use these commands in your environment, but please test, test and test again in a dev/test environment before running in production. While the commands are relatively harmless, it is still important to test and ensure they work for your environment.

Happy Skypeíng

Steve

Posted by on 29 July 2015 | 7:55 pm

Windows 10 and DirectX 12 released!

One giant leap for gamers! It’s been less than 18 months since we announced DirectX 12 at GDC 2014. Since that time, we’ve been working tirelessly with game developers and graphics card vendors to deliver an API that offers more control over graphics hardware than ever before. When we set out to design DirectX 12, game developers gave us a daunting set of requirements:

1) Dramatically reduce CPU overhead while increasing GPU performance
2) Work across the Windows and Xbox One ecosystem
3) Provide support for all of the latest graphics hardware features

Today, we’re excited to announce the fulfillment of these ambitious goals! With the release of Windows 10, DirectX 12 is now available for everyone to use, and the first DirectX 12 content will arrive in the coming weeks. For a personal message from our Vice President of Development, click here.

What will DirectX 12 do for me?

We’re very pleased to see all of the excitement from gamers about DirectX 12! This excitement has led to a steady stream of articles, tweets, and YouTube videos discussing DirectX 12 and what it means to gamers. We’ve seen articles questioning whether DirectX 12 will provide substantial benefits, and we’ve seen articles that promise that with DirectX 12, the 3DFX Voodoo card you have gathering dust in your basement will allow your games to cross the Uncanny Valley. Let’s set the record straight. We expect that games that use DirectX 12 will:

1) Be able to write to one graphics API for PCs and Xbox One
2) Reduce CPU overhead by up to 50% while scaling across all CPU cores
3) Improve GPU performance by up to 20%
4) Realize more benefits over time as game developers learn how to use the new API more efficiently

To elaborate, DirectX 12 is a paradigm shift for game developers, providing them with a new way to structure graphics workloads. These new techniques can lead to a tremendous increase in expressiveness and optimization opportunities. Typically, when game developers decide to support DirectX 12 in their engine, they will do so in phases. Rather than completely overhauling their engine to take full advantage of every aspect of the API, they will start with their DirectX 11 based engine and then port it over to DirectX 12. We expect such engine developers to achieve up to a 50% CPU reduction while improving GPU performance by up to 20%. The reason we say “up to” is that every game is different – the more of the various DirectX 12 features (see below) a game uses, the more optimization it can expect. Over time, we expect that games will build DirectX 12’s capabilities into the design of the game itself, which will lead to even more impressive gains. The game “Ashes of the Singularity” is a good example of a game that bakes DirectX 12’s capabilities into its design. The result: an RTS game that can show tens of thousands of actors engaged in dozens of battles simultaneously. Speaking of games, support for DirectX 12 is currently available to the public in an early experimental mode in Unity 5.2 Beta and in Unreal 4.9 Preview, so the many games powered by these engines will soon run on DirectX 12. In addition to games based on these engines, we’re on pace for the fastest adoption of a new DirectX technology this millennium – so stay tuned for lots of game announcements!

What hardware should I buy?
The great news is that, because we’ve designed DirectX 12 to work broadly across a wide variety of hardware, roughly 2 out of 3 gamers will not need to buy any new hardware at all. If you have supported hardware, simply get your free upgrade to Windows 10 and you’re good to go. However, as a team full of gamers, our professional (and clearly unbiased) opinion is that the upcoming DirectX 12 games are an excellent excuse to upgrade your hardware. Because DirectX 12 makes all supported hardware better, you can rest assured that whether you spend $100 or $1000 on a graphics card, you will benefit from DirectX 12.

But how do you know which card is best for your gaming dollar? How do you make sense of the various selling points that you see from the various graphics hardware vendors? Should you go for a higher “feature level”, or should you focus on another advertised feature such as async compute or support for a particular bind model? Most of these developer-focused features do provide some incremental benefit to users, and more information on each of them can be found later in this post. Generally speaking, however, the most important thing is simply to get a card that supports DirectX 12. Beyond that, we recommend focusing on how the different cards actually perform on real games and benchmarks. This gives a much more reliable view of what kind of performance to expect. DirectX 11 game performance data is widely available today, and we expect DirectX 12 game performance data to be available in the very near future. Combined, this performance data is a great way to make your purchasing decisions.

Technical Details (note: much of this content is taken from earlier blogs)

CPU Overhead Reduction and Multicore Scaling

Pipeline state objects

Direct3D 11 allows pipeline state manipulation through a large set of orthogonal objects. For example, input assembler state, pixel shader state, rasterizer state, and output merger state are all independently modifiable. This provides a convenient, relatively high-level representation of the graphics pipeline; however, it doesn’t map very well to modern hardware, primarily because there are often interdependencies between the various states. For example, many GPUs combine pixel shader and output merger state into a single hardware representation, but because the Direct3D 11 API allows these to be set separately, the driver cannot resolve things until it knows the state is finalized, which isn’t until draw time. This delays hardware state setup, which means extra overhead and fewer maximum draw calls per frame. Direct3D 12 addresses this issue by unifying much of the pipeline state into immutable pipeline state objects (PSOs), which are finalized on creation. This allows hardware and drivers to immediately convert the PSO into whatever hardware-native instructions and state are required to execute GPU work. Which PSO is in use can still be changed dynamically, but to do so the hardware only needs to copy the minimal amount of pre-computed state directly to the hardware registers, rather than computing the hardware state on the fly. This means significantly reduced draw call overhead, and many more draw calls per frame.
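To make that concrete, here is a minimal sketch of creating a PSO with the Direct3D 12 API. It is illustrative rather than production code: the device, root signature and compiled shader bytecode are assumed to already exist, and most state is left at simple defaults.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Build one immutable pipeline state object. The formerly separate
// Direct3D 11 state objects (shaders, rasterizer, blend, ...) are fused
// into a single description that the driver can compile to hardware-native
// state once, at creation time, instead of at draw time.
ComPtr<ID3D12PipelineState> CreateSimplePso(
    ID3D12Device* device, ID3D12RootSignature* rootSignature,
    D3D12_SHADER_BYTECODE vs, D3D12_SHADER_BYTECODE ps)
{
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature = rootSignature;
    desc.VS = vs;
    desc.PS = ps;
    desc.RasterizerState.FillMode = D3D12_FILL_MODE_SOLID;
    desc.RasterizerState.CullMode = D3D12_CULL_MODE_BACK;
    desc.BlendState.RenderTarget[0].RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
    desc.SampleMask = 0xFFFFFFFF;
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.NumRenderTargets = 1;
    desc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;

    ComPtr<ID3D12PipelineState> pso;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso; // finalized and immutable from here on
}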
Command lists and bundles

In Direct3D 11, all work submission is done via the immediate context, which represents a single stream of commands that goes to the GPU. To achieve multithreaded scaling, games also have deferred contexts available to them, but like PSOs, deferred contexts do not map perfectly to hardware, and so relatively little work can be done in them.

Direct3D 12 introduces a new model for work submission based on command lists that contain the entirety of the information needed to execute a particular workload on the GPU. Each new command list contains information such as which PSO to use, what texture and buffer resources are needed, and the arguments to all draw calls. Because each command list is self-contained and inherits no state, the driver can pre-compute all necessary GPU commands up-front and in a free-threaded manner. The only serial process necessary is the final submission of command lists to the GPU via the command queue, which is a highly efficient process.

In addition to command lists, Direct3D 12 also introduces a second level of work pre-computation: bundles. Unlike command lists, which are completely self-contained and typically constructed once, submitted once, and discarded, bundles provide a form of state inheritance which permits reuse. For example, if a game wants to draw two character models with different textures, one approach is to record a command list with two sets of identical draw calls. But another approach is to “record” one bundle that draws a single character model, then “play back” the bundle twice on the command list using different resources. In the latter case, the driver only has to compute the appropriate instructions once, and creating the command list essentially amounts to two low-cost function calls.
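A rough sketch of that record-once, play-back-twice pattern follows. The parameter names (bundleAllocator, characterPso, textureA and so on) are hypothetical placeholders for objects a real engine would own, and we assume the bundle inherits the root signature and descriptor table bound on the calling command list.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Record one character draw into a bundle, then replay it twice with
// different textures. The allocator must have been created with
// D3D12_COMMAND_LIST_TYPE_BUNDLE.
void DrawTwoCharacters(
    ID3D12Device* device,
    ID3D12CommandAllocator* bundleAllocator,
    ID3D12PipelineState* characterPso,
    ID3D12GraphicsCommandList* commandList,
    const D3D12_VERTEX_BUFFER_VIEW& characterVbView,
    UINT characterVertexCount,
    D3D12_GPU_DESCRIPTOR_HANDLE textureA,
    D3D12_GPU_DESCRIPTOR_HANDLE textureB)
{
    ComPtr<ID3D12GraphicsCommandList> bundle;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_BUNDLE,
                              bundleAllocator, characterPso,
                              IID_PPV_ARGS(&bundle));
    bundle->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    bundle->IASetVertexBuffers(0, 1, &characterVbView);
    bundle->DrawInstanced(characterVertexCount, 1, 0, 0);
    bundle->Close(); // the driver pre-computes the GPU instructions once, here

    // Each playback is little more than a function call on the direct
    // command list; only the bound descriptor table changes between them.
    commandList->SetGraphicsRootDescriptorTable(0, textureA);
    commandList->ExecuteBundle(bundle.Get());
    commandList->SetGraphicsRootDescriptorTable(0, textureB);
    commandList->ExecuteBundle(bundle.Get());
}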
Descriptor heaps and tables

Resource binding in Direct3D 11 is highly abstracted and convenient, but leaves many modern hardware capabilities underutilized. In Direct3D 11, games create “view” objects of resources, then bind those views to several “slots” at various shader stages in the pipeline. Shaders in turn read data from those explicit bind slots, which are fixed at draw time. This model means that whenever a game wants to draw using different resources, it must re-bind different views to different slots, and call draw again. This is yet another case of overhead that can be eliminated by fully utilizing modern hardware capabilities.

Direct3D 12 changes the binding model to match modern hardware and significantly improve performance. Instead of requiring standalone resource views and explicit mapping to slots, Direct3D 12 provides a descriptor heap into which games create their various resource views. This provides a mechanism for the GPU to directly write the hardware-native resource description (descriptor) to memory up-front. To declare which resources are to be used by the pipeline for a particular draw call, games specify one or more descriptor tables, which represent sub-ranges of the full descriptor heap. As the descriptor heap has already been populated with the appropriate hardware-specific descriptor data, changing descriptor tables is an extremely low-cost operation.

In addition to the improved performance offered by descriptor heaps and tables, Direct3D 12 also allows resources to be dynamically indexed in shaders, providing unprecedented flexibility and unlocking new rendering techniques. As an example, modern deferred rendering engines typically encode a material or object identifier of some kind into the intermediate g-buffer. In Direct3D 11, these engines must be careful to avoid using too many materials, as including too many in one g-buffer can significantly slow down the final render pass. With dynamically indexable resources, a scene with a thousand materials can be finalized just as quickly as one with only ten.

Modern hardware has a variety of different capabilities with respect to the total number of descriptors that can reside in a descriptor heap, as well as the number of specific descriptors that can be referenced simultaneously in a particular draw call. With DirectX 12, developers can take advantage of hardware with more advanced binding capabilities by using our tiered binding system. Developers who take advantage of the higher binding tiers can use more advanced shading algorithms, which lead to reduced GPU cost and higher rendering quality.

Increasing GPU Performance

GPU Efficiency

Currently, there are three key areas where GPU improvements can be made that weren’t possible before: explicit resource transitions, parallel GPU execution, and GPU-generated workloads. Let’s take a quick look at all three.

Explicit resource transitions

In DirectX 12, the app has the power to identify when resource state transitions need to happen. For instance, in the past a driver would have to ensure all writes to a UAV are executed in order by inserting ‘Wait for Idle’ commands after each dispatch with resource barriers. If the app knows that certain dispatches can run out of order, the ‘Wait for Idle’ commands can be removed. Using the new Resource Barrier API, the app can also specify a ‘begin’ and ‘end’ transition while promising not to use the resource while in transition. Drivers can use this information to eliminate redundant pipeline stalls and cache flushes.
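As a small illustration, this is roughly what one explicit transition looks like with the Resource Barrier API; the command list and resource are assumed to already exist, and the particular before/after states are just an example.

#include <d3d12.h>

// Explicitly transition a texture from render-target to shader-readable.
// Because the app states exactly when the transition happens, the driver
// does not have to insert conservative 'Wait for Idle' stalls on its own.
void TransitionToShaderResource(ID3D12GraphicsCommandList* commandList,
                                ID3D12Resource* sceneTexture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource = sceneTexture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    commandList->ResourceBarrier(1, &barrier);
}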
Parallel GPU execution

Modern hardware can run multiple workloads on multiple ‘engines’. Three types of engines are exposed in DirectX 12: 3D, Compute, and Copy. It is up to the app to manage dependencies between queues. We are really excited about two notable compute engine scenarios that can take advantage of this GPU parallelism: long-running but low-priority compute work, and tightly interleaved 3D/compute work within a frame. An example would be compute-heavy dispatches during shadow map generation. Another notable example is texture streaming, where a copy engine can move data around without blocking the main 3D engine, which is especially valuable when going across PCI-E. This feature is often referred to in marketing as “async compute”.

GPU-generated workloads

ExecuteIndirect is a powerful new API for executing GPU-generated Draw/Dispatch workloads that has broad hardware compatibility. Being able to vary things like vertex/index buffers, root constants, and inline SRV/UAV/CBV descriptors between invocations enables new scenarios as well as unlocking potentially dramatic efficiency improvements.

Multiadapter Support

DirectX 12 allows developers to make use of all graphics cards in the system, which opens up a number of exciting possibilities:

1) Developers no longer need to rely on multiple manufacturer-specific code paths to support AFR/SFR for gamers with CrossFire/SLI systems
2) Developers can further optimize for gamers who have CrossFire/SLI systems

Developers can also make use of the integrated graphics adapter, which previously sat idle on gamers’ machines.

Support for new hardware features

DirectX has the notion of “Feature Level”, which allows developers to use certain graphics hardware features in a deterministic way across different graphics card vendors. Feature levels were created to reduce complexity for developers. A given “Feature Level” combines a number of different hardware features together, which are then supported (or not supported) by the graphics hardware. If the hardware supports a given “Feature Level”, it must support all features in that “Feature Level” and in previous “Feature Levels”. For instance, if hardware supports “Feature Level 11_2”, it must also support “Feature Level 11_1”, “Feature Level 11_0”, and so on. Grouping features together in this way dramatically reduces the number of hardware permutations that developers need to worry about.

“Feature Level” has been a source of much confusion because we named it in a confusing way (we even considered changing the naming scheme as part of the DirectX 12 API release, but decided not to, since changing it at this point would create even more confusion). Despite the numeric suffix of “Feature Level ‘11’” or “Feature Level ‘12’”, these numbers are mostly independent of the API version and are not indicative of game performance. For example, it is entirely possible that a “Feature Level 11” GPU could substantially outperform a “Feature Level 12” GPU when running a DirectX 12 game. With that being said, Windows 10 includes two new “Feature Levels”, which are supported on both the DirectX 11 and DirectX 12 APIs (as mentioned earlier, “Feature Level” is mostly independent of the API version).

Feature Level 12.0
- Resource Binding Tier 2
- Tiled Resources Tier 2: Texture3D
- Typed UAV Tier 1

Feature Level 12.1
- Conservative Rasterization Tier 1
- ROVs

More information on the new rendering features can be found here.

You passed the endurance test! If you’ve made it this far, you should check out our instructional videos here. Ready to start writing some code? Check out our samples here to help you get started. Oh, and to butcher Churchill - this is the end of the beginning, not the end or the beginning of the end. There is much, much more to come. Stay tuned!

Posted by on 29 July 2015 | 6:37 pm

Visual Studio template for cross-platform OpenGL development

Today the Visual Studio team shipped a project template for cross-platform graphics development. This uses the Visual Studio shared project mechanism to target the Windows Universal Platform, Android, and iOS, with identical OpenGL ES 2.0 rendering code shared across all three platforms. I'm posting this here partly because I think it is cool, but more importantly because I had a hand in making it happen. In order to run portable GL rendering code on Windows, this template uses a version of ANGLE that is maintained by my team. When you compile it for Windows, Visual Studio will automatically pull down our ANGLE binaries from NuGet.

The Visual Studio project looks like so:

Let's be honest, this is not the most visually exciting graphics demo ever :-) But hey, just insert some different draw calls, different GLSL shaders, different vertices, maybe a few textures, and you could make it draw something far more interesting!

Posted by on 29 July 2015 | 6:12 pm

Small Basic Game Programming - Game Math

Once I wrote a series of blog posts about game programming in Small Basic. Today, I'd like to add one more post, about game math. In this article, I'll talk about the following math operations:

- Random numbers
- Remainder
- Trigonometric functions
- Math for kinetics (dynamics)
- Math for collision

Random Numbers

Random numbers are sometimes used for things such as random actions of an enemy or AI (artificial intelligence). A simple sample is in the blog post Small Basic Game Programming - Let's start with RPS game. And you can learn more about random numbers in another post, Small Basic - Random Numbers.

Remainder

Remainder is a useful function for cases such as the following:

- to check whether a number is odd or even
- to convert a large angle (>= 360 degrees) to a small one (< 360)
- to do another operation once every N times
- to get the column from a 2-D board implemented as a 1-D array

The following code is a sample of the last case in the list above. This code is from my 2048 game (CLZ771-0).

Sub Board_CellToIndex
  ' param col, row - cell position
  ' return i - index of board array
  i = (row - 1) * 4 + col
EndSub

Sub Board_IndexToCell
  ' param i - index of board array
  ' return col, row - cell position
  col = Math.Remainder(i - 1, 4) + 1
  row = Math.Floor((i - 1) / 4) + 1
EndSub

Trigonometric Functions

Trigonometric functions will be used in the following situations:

- to rotate Shapes not from their centers (using sin and cos)
- to convert a polar coordinate to a rectangular coordinate (using sin and cos)
- to convert a rectangular coordinate to a polar coordinate (using arctan)

The following subroutine converts a rectangular (Cartesian) coordinate (x, y) to a polar coordinate (r, a). This subroutine uses the tan⁻¹ (arctan) function. For example, this subroutine is used in my game Dragon vs Turtle (HMP803-5) to get an angle for moving the Turtle from a mouse position.

Sub Math_CartesianToPolar
  ' Math | convert Cartesian coordinate to polar coordinate
  ' param x, y - Cartesian coordinate
  ' return r, a - polar coordinate (0<=a<360)
  r = Math.SquareRoot(x * x + y * y)
  If x = 0 And y > 0 Then
    a = 90 ' [degree]
  ElseIf x = 0 And y < 0 Then
    a = -90
  Else
    a = Math.ArcTan(y / x) * 180 / Math.Pi
  EndIf
  If x < 0 Then
    a = a + 180
  ElseIf x >= 0 And y < 0 Then
    a = a + 360
  EndIf
EndSub

Math for Kinetics (Dynamics)

Motion of a point is calculated from its acceleration (a [m/s2]), velocity (v [m/s]), displacement (d [m]) and time (t [s]). Delta velocity is calculated from the following equation:

Δv = a × Δt

Delta displacement is calculated from the following equation, the average of the velocities before and after the step multiplied by the elapsed time:

Δd = ((v + (v + Δv)) / 2) × Δt

The following graph shows Δd as a dark gray trapezoid, and the slope of the graph shows the acceleration. This t-v graph is drawn by the program JLF545-1. My game Lunar Module (DTF312-2) uses this kind of calculation.

Math for Collision

Today I'd like to introduce two easy ways to calculate collision. The first one is between a circle and the walls of the graphics window. The following code is from the program LDH017. This code checks the circle position against the walls (x<=0, x>=the graphics window width, y<=0 and y>=the graphics window height).
' collision detect with walls
If (_x - _s[id]["w"] / 2) <= 0 Then
  _x = _x - 2 * (_x - _s[id]["w"] / 2)
  _s[_id]["vx"] = -_s[_id]["vx"] * _s[_id]["r"]
ElseIf gw <= (_x + _s[id]["w"] / 2) Then
  _x = _x - 2 * (_x + _s[id]["w"] / 2 - gw)
  _s[_id]["vx"] = -_s[_id]["vx"] * _s[_id]["r"]
EndIf
If (_y - _s[id]["h"] / 2) <= 0 Then
  _y = _y - 2 * (_y - _s[id]["h"] / 2)
  _s[_id]["vy"] = -_s[_id]["vy"] * _s[_id]["r"]
ElseIf gh <= (_y + _s[id]["h"] / 2) Then
  _y = _y - 2 * (_y + _s[id]["h"] / 2 - gh)
  _s[_id]["vy"] = -_s[_id]["vy"] * _s[_id]["r"]
EndIf

The second one is collision between circles. To detect a collision between two circles, calculate the distance between their centers. This method can also be used as an approximation when the shape is not a circle. The following games use this method:

- DONKEY KONG in the Small Basic World (FGM769-8)
- Turtle Dodger (QZN342-3)

The following code is from DONKEY KONG. The variable d is the distance between Mario and a barrel.

nb = barrel["num"]
For ib = 1 To nb
  xb = barrel[ib]["x"] + 16
  yb = barrel[ib]["y"] + 16
  d = Math.SquareRoot(Math.Power((xb - xm), 2) + Math.Power((yb - ym), 2))
  If d < 20 Then
    mario["hit"] = "True"
    mario["vx"] = 0
    ym = ym - 24
    mario["vr"] = 720
  EndIf
EndFor

See Also

- Small Basic: How to Use Trigonometric Functions
- Small Basic: Dynamic Graphics (for more details about dynamics and collision)

Posted by on 29 July 2015 | 6:06 pm

Office 365 Monitoring and Alerting For Free

I rarely do specific partner or vendor promotion, but this time I'm going to make a small exception. Many SharePoint people know Steve Peschka by name from a while back, and I used to work with him on numerous projects, like the SP2013 readiness materials for developers and IT pros. Steve is one of those guys who are absolutely honest and always highly helpful to others. I personally consider him one of the biggest influencers and mentors during my own journey at Microsoft. Steve jumped off the Microsoft wagon a while back, but did not go far. He has continued sharing his wisdom on his Share-n-Dipity blog, and has released a free tool for Office 365 customers to monitor their tenant availability. This is a really great tool for any customer using Office 365, from a monitoring perspective. Obviously we hope and aim not to have actual service breaks in Office 365, but since we are talking about IT systems created by humans, there can be unexpected situations, which are important to know about.

Introduction

As you all know, a frustrating problem for cloud customers is that you are at the mercy of the service providers to know of any outages. The service provider may not know of an outage affecting your tenancy, or may not post notifications on their service portals timely or frequently enough - and you are left helpless, fielding calls from frustrated users. I have seen this happen at multiple customers during outages. There is a new freemium product designed to help you with these issues called Office365Mon, which provides you with health information for your Office 365 tenancy. You can check it out at office365mon.com.

What’s the product?

The product provides 24×7 monitoring of SharePoint Online sites and Exchange Online mailboxes that are hosted in Office 365. It simulates end-user actions periodically, watching for any degradation in service health. The product sends you email and text-based alerts if it detects any problems. This keeps you in the know about your tenant’s health, before users call and even before Office 365 updates its Service Health portal. In addition to the monitoring and notification, there are both reports and APIs you can use with the Office365Mon service. Basic reports are also included for free and give you a view into the outages your tenant resources have had, the recent health checks that were issued, monthly average response time, etc. You also get a free “My Info” page, which provides a complete overview of the health of your tenant. Office365Mon recently announced a partnership with Microsoft to be an early adopter of the new Office 365 Service Communications API. With that integration you can go to one page and see your basic subscription info, the current online status of your services based on Office365Mon monitoring, the health status of all of your Office 365 services (down to the feature level, according to Microsoft), the latest messages that Microsoft has made available to your Service Health Dashboard, and your current monthly availability for your tenant services. Here’s a screenshot of all of that data on the My Info page:

If you sign up for the Premium Features you get access to the Advanced Reports, a set of graphical reports covering your outage history, recent performance history, monthly availability and downtime, and outage reasons. It also includes access to a reporting API that can be used to manually download reporting CSV files or programmatically retrieve report data as CSV or JSON.
There is an additional API that can be used for managing your Office365Mon subscription – who the subscription admins are, which phone numbers or email addresses are notified in the case of outages, what resources are being monitored, etc. All of the product features can be managed via the web site at https://office365mon.com, which is also mobile friendly, so it works really well on a phone or tablet. One other thing: the service lets you have multiple subscriptions, so it works great if you’re an Office 365 reseller and you want to turn on monitoring for your customers.

Who’s the Product For?

The product works for anyone who wants to keep tabs on the service health of their tenant, or that of a customer they work with:

- A service engineer in IT who’s responsible for day-to-day operations
- A helpdesk professional who wants to keep tabs on general service issues
- A support engineer who is working with a given Office 365 customer on their reliability issues
- A reseller who wants to know the health of the tenancies they sold
- A consulting professional working on an Office 365 solution
- A sales specialist ensuring their demo tenancies are in top shape ahead of critical customer briefings
- The super user at your company who is typically first to help colleagues with their tech problems
- The all-around admin at a small business who takes care of everything from sales to IT
- And more…

How Do You Get Started?

The product is intended to be super easy to sign up for and get started with. You sign in with the same account you use to sign into Office 365. The site lets you know everything you need to provide to get your subscription minimally configured. It also implements checks for you along the way: if you add a new notification address, it sends an email to it right away to verify it works. When you add a mailbox to monitor, it validates that it can connect to it right away and lets you know if there are any problems. Once everything is minimally configured, it starts issuing health probes within a couple of minutes. It also has a set of links at the top of the page that you can start using right away to see how your tenant is doing. If you would like to learn more about the specifics of how Office365Mon performs authentication and authorization, see this post.

Easy Office 365 Monitoring for Free

There aren’t a lot of free things these days, but all of the basic monitoring, alerting and reporting are free for life at Office365Mon. It’s easy to use and quick to set up. If you want to stay on top of the health of your Office 365 tenant, spend two minutes and set up a free subscription.

Posted by on 29 July 2015 | 6:03 pm

Imagine Cup World Finals 2015, Day Three: Competition Day

Months of work and miles of travel finally behind them, 33 teams competed today for the Imagine Cup in one of three categories: World Citizenship, Innovation and Games.

Luca Bruno anxiously awaits his time for Team #idon’tgiveanapp to present HeartWatch to the judges.

As students waited for their assigned time slot to present, they sported serious game faces. Three teams hit the ground running with an 8 a.m. presentation, while others, like Team Octavian of Nepal, had to endure waiting until 3:30 p.m., the very last time slot. The energy was palpable. Everyone seemed nervous, even those not competing. It turns out that nerves, like laughter, are contagious. To battle the nerves early this morning, Team Australia hosted a mini dance party for themselves to Taylor Swift’s “Shake It Off.”

Daria Chernova entertains the judges with Team Izhard’s video game, Ovivo.

As little pods of teams hunkered down in nooks scattered about Building 92 of the Microsoft campus, some stared into the void, chewing their nails. Others locked on to their computer screens, tweaking slides at the last minute. Still others practiced their presentations with each other for the umpteenth time. They went in nervous; they came out relieved. As students exited the presentation rooms, they sighed collectively. Team Izhard’s Daria Chernova jumped with excitement after presenting the team’s game, Ovivo. Clearly, they felt great about their presentation and product. Team New Zealand wandered by after their presentation, debriefing one another and pointing out where they could have improved.

Wati Pitrianingsih from Team Alix of Indonesia builds the case for their World Citizenship app, Solidare.

When you watch, you feel like these students are your friends; you care about them; you almost burst with pride for this generation of such inventive, driven young people. But still, the work wasn't over. After their presentations, teams entertained judges at their showcase booths. Here, judges from each category interacted with the apps and games in real time, asked questions about the technology and user interface, and even suggested marketing plans. Judge Edward Happ (CIO of the Red Cross) asked Team Alix business-oriented questions: “What are the costs you will have to run this business?” Wati Pitrianingsih had obviously prepared herself for these kinds of questions, and answered in kind. Team EpSyDet listened with rapt attention to judge Janine Firpo (of the Gates Foundation) as she provided feedback on the user experience of their epilepsy detection app.

Team BCR: Brain Controlled Robot, from the Palestinian Authority, showcase their robot to the CEO of Raspberry Pi Trading, Eben Upton.

The work is done. Now the waiting begins. They won’t find out the results of their hard work until tomorrow morning at the Awards Breakfast. Stay tuned as we cover the awards live on Twitter (@MSFTImagine) and via this blog. And don’t forget to tune into the live webcast of the Imagine Cup Championship at noon PT on July 31 to see the next Imagine Cup World Champion crowned.

Posted by on 29 July 2015 | 5:45 pm

IntelliTrace, method call information, and Edit and Continue

In this blog post I’m going to talk about IntelliTrace’s default configuration and the ability to opt in to capturing method call information. If you haven’t done so already, check out the announcement of IntelliTrace in Visual Studio 2015, which gives you an overview of IntelliTrace and its improved UI. If you’re not already familiar with IntelliTrace, that post will help set the context. The following image shows you the IntelliTrace settings, which you can access through Tools > Options > IntelliTrace:

Option A – Default configuration: IntelliTrace collects events only, optimized for low overhead; Edit and Continue is unaffected
Option B – Capture method call information: IntelliTrace records every single method call and captures some parameter information; Edit and Continue is disabled

Let’s go further into how these two options allow you to tailor the IntelliTrace experience to your debugging needs.

Option A - Default configuration

When you install Visual Studio, IntelliTrace is enabled by default and is set up to only collect events (option A above). This means that IntelliTrace will only record state information for moments in time it considers interesting, rather than monitoring and recording every single method call. The diagram below shows you an example of how that works with a desktop app using ADO.NET. The default configuration is optimized for best debugging performance and supports EnC. As you can see in the screenshot below, I have the Diagnostic Tools window showing me the IntelliTrace events collected on the right, and on the left I’m using EnC to change my code while still debugging (look for my cursor one line below the breakpoint). For more information on the new IntelliTrace UI within the Diagnostic Tools window, and how to consume the events collected by IntelliTrace, check out the section titled “Live Debugging using IntelliTrace in Visual Studio 2015” from the announcement of IntelliTrace in Visual Studio 2015.

Option B - Opt in to collect method call information

You have the choice to change IntelliTrace’s default configuration to collect events as well as every single method call that takes place (option B above). If the parameters are primitives (including strings), their values will be captured as well. If the parameters are structs or complex types (i.e. instances of classes), only the properties that are primitive data types will be captured. When enabled, a drop-down appears above the filtering control in the Diagnostic Tools window. You can view the method call information that IntelliTrace has collected by changing the view to “Calls” using the drop-down. You can also navigate the history of the calls made using the IntelliTrace toolbar that appears inside the code editor, using DataTips to examine the values of the parameters. Read more on how to consume the information shown in the Calls view, as well as how to use the IntelliTrace toolbar that appears within the margin of the code editor.

Unfortunately, the API that IntelliTrace uses to collect the method call data doesn’t support Edit and Continue. This is the only case where using IntelliTrace affects your ability to use Edit and Continue, and hence you are unfortunately forced to pick which of the two productivity features you need most for that particular debugging session. There are other cases where EnC doesn’t work regardless of IntelliTrace, but we are addressing that with continued improvements in EnC.

Wrapping up

We are always looking for feedback and comments, especially if there is a feature you find useful.
You can leave general comments & questions at the end of this blog post or via Send-a-Smile, and submit feature requests to our IntelliTrace UserVoice. You can also send us a tweet.

Posted by on 29 July 2015 | 5:07 pm

New Mail Simulator Tool Released

Office Interoperability is pleased to announce a new tool – MailSim (for Mail Simulator). Unlike other test tools, MailSim uses an Outlook 2013 client to generate real traffic for your email server. MailSim automates Outlook 2013 behavior to provide defined and repeatable email traffic and folder operations. For realistic testing, there can be many client machines for one email server, where each client runs its own instances of MailSim.exe and Outlook 2013. Each MailSim/Outlook client can perform its own actions, on its own schedule, for hours to weeks of combined testing. MailSim reads user-defined configuration files that completely determine subsequent actions. You have the option to specify sets of email account names, attachments, and folders. Alternately, MailSim can be directed to choose random subsets of these. The MailSim repository is located on Github here. To ask questions, or report problems, please use the repository's Issues page.  

Posted by on 29 July 2015 | 3:35 pm

What is new with Serial in Windows 10

Authored by George Roussos [MSFT]

The Serial Communication protocol is everywhere; it is broadly available, easy to learn, and low cost. It is used across many different transports: typically over USB, in some cases over Bluetooth, and even over TCP/IP. Many people are familiar with COM ports and programs that read data from and/or write data to them. Today we find serial communications both in 30-year-old hardware like natural gas meters and in new products like many 3D printers, or those in the prototyping stage based upon Arduino boards. We listened to customer feedback on Serial while planning Windows 10 and acted upon two high-level points:

1. Improve Serial over USB driver support in Windows.
2. Provide a Windows Runtime API for communication with Serial devices.

This blog entry focuses on enhancements for USB-connected serial devices in Windows 10, and how customers can provide additional feedback on them which we can efficiently act upon.

1. Improved Serial over USB driver support in Windows 10

Earlier versions of Windows contained a driver for USB-connected serial devices: usbser.sys. However, the driver did not include a compatible ID match in an INF; it had to be included using modem INFs, which was not standard. In Windows 10, we added inbox support for USB CDC Abstract Control Model (ACM) compliant hardware. Usbser.sys is now installed as a compatible ID match for USB CDC compliant hardware, without requiring a 3rd-party driver or inclusion via modem INFs. Now devices that report these compatible IDs:

USB\Class_02&SubClass_02&Prot_01
USB\Class_02&SubClass_02

… including popular prototyping boards like Arduinos, just work with our built-in driver. Usbser.sys has also been completely re-written in WDF, improving its overall reliability as well as incorporating new power management features such as USB Selective Suspend. See USB Serial driver on MSDN for details.

2. A Windows Runtime API for communication with Serial devices

Windows 10 includes the Windows.Devices.SerialCommunication universal API, designed for these three scenarios:

- USB peripherals like Arduinos – including as a USB accessory on new phones and tablets
- Peripherally connected USB to RS-232 adapters
- UARTs inside embedded systems like MinnowBoard Max or Raspberry Pi v2 running the Windows IoT SKU

This API does not include support for accessing UARTs/COM ports inside phones/tablets/PCs; Windows 10 focused on the above three scenarios. The //build/ 2015 session "USB Accessories including Serial Communications" introduces this API and walks through its design and usage. The Windows 10 SDK includes two universal SDK samples illustrating this API: the CustomSerialDeviceAccess SDK sample, and the new SerialArduino SDK sample from the above //build talk, now available including C# and Arduino sketch source code.

How to Provide Feedback

We listen to and act upon customer feedback; all of the above are results of prior feedback. If you have encountered a problem with functionality described in this blog entry, or want additional functionality, please see below. Our team listens to two feedback channels: the Forums and the Feedback App (see the Feedback App post for additional information), both of which are available to everyone. Please follow the guidance below on where to provide your feedback and what to include, to help us act upon it efficiently.

Forums

Please create a new post on the Windows Insider Program forum under the ‘Devices and Drivers’ topic.
Feedback App

Please file a bug under:
Category: Hardware, Devices, and Drivers
Sub-Category: USB devices and connectivity

What information to include

To help us efficiently act upon any bugs or feedback you have, please include the relevant information below.

Problems with the built-in USBSer.sys USB Serial driver for USB CDC compliant devices

Feedback or bugs: please include ‘USBSer’ in the bug title. For bugs, please add:
- Crisp steps to reproduce the issue
- Hardware IDs and Compatible IDs for the target device (see below)
- If the problem involves data transfer, traces as described under Tracing below

Problems with the Windows.Devices.SerialCommunication Universal API

Feedback or bugs: please include ‘Windows.Devices.SerialCommunication’ in the bug title. For bugs, please add:
- A sample code fragment that illustrates the problem
- All app manifest device capabilities declarations, like:

<DeviceCapability Name="serialcommunication">
  <Device Id="any">
    <Function Type="name:serialPort" />
  </Device>
</DeviceCapability>

How to capture Hardware IDs

- Attach your Arduino, open Device Manager, select the board, select Properties, then the Details tab
- Select Hardware IDs from the Property dropdown, select the values, right-click and “copy” them – and paste them into the bug
- Select Compatible IDs from the Property dropdown, select the values, right-click and “copy” them – and paste them into the bug

Example: Arduino Uno R3

Hardware IDs
USB\VID_2341&PID_0043&REV_0001
USB\VID_2341&PID_0043

Compatible IDs
USB\Class_02&SubClass_02&Prot_01
USB\Class_02&SubClass_02
USB\Class_02

Tracing

Please copy and paste the commands below into an administrative command window, reproduce the problem, and attach the resultant trace files to the bug.

Before repro:

logman create trace -n Serial_WPP -o %SystemRoot%\Tracing\Serial_WPP.etl -nb 128 640 -bs 128
logman update trace -n Serial_WPP -p {7F82DC23-235A-4CCA-868C-59531F258662} 0x7FFFFFFF 0xFF
logman update trace -n Serial_WPP -p {8FBF685A-DCE5-44C2-B126-5E90176993A7} 0x7FFFFFFF 0xFF
logman update trace -n Serial_WPP -p {0ae46f43-b144-4056-9195-470054009d6c} 0x7FFFFFFF 0xFF
logman start -n Serial_WPP

<Reproduce the problem at this point (do not copy and paste this)>

After repro:

logman stop -n Serial_WPP
logman delete -n Serial_WPP
md %systemroot%\Tracing\Logs
move /Y %SystemRoot%\Tracing\Serial_WPP_000001.etl %SystemRoot%\Tracing\Logs\Serial_WPP.etl

Posted by on 29 July 2015 | 3:32 pm

Monitor vehicle traffic using Power BI

Today we have Spyros Sakellariadis joining us again to continue our series on how to use Power BI to monitor real-time IoT events. Take it away, Spyros!

By Spyros Sakellariadis

Who hasn't wanted a personalized view of the vehicle traffic on the route to and from their office? Let's do it by getting data from a public data feed into Azure and Power BI! At the same time, this will serve as an example of how to get data from thousands of public data feeds and visualize them with Power BI. In a previous post, I showed how to get data from IoT sensors into Azure and Power BI based upon a simple infrastructure that we published in an open source project called ConnectTheDots. The constraint in that scenario is that those sensors need to be ones which you can access and program, and probably own. There is lots of valuable information, however, in public data sources which it would be great to be able to munge and view. To keep things simple, I'm going to use the same basic infrastructure to get the traffic data into Power BI as I used in the previous post, with a twist. Instead of connecting the data sources (sensors in the previous blog, the WSDOT web service in this one) to a gateway running on a Raspberry Pi, we will run that same gateway code in Azure. Once we have pulled data from the WSDOT web service and sent it to an Azure Event Hub, the rest is basically the same as we did last time. Easy as pie.

Acquiring the data

Since I live in the Seattle area, I am interested in knowing about the traffic in the state of Washington. I really, really care about the traffic on the two bridges separating Seattle and Bellevue (on highways I-90 and I-520), as I have to drive over them daily. I thought it might be a daunting task to collect that data, but fortunately the Washington State Department of Transportation provides data for 170 permanent traffic recorder locations in Washington State, including those two highways. Most are updated hourly, some as frequently as every 20 seconds. This data is accessed through a single gateway and API, as described on their website at http://wsdot.wa.gov/Traffic/api/. If you want to understand the traffic on the roadways, selecting the Traffic Flow link shows the function calls and fields retrieved, and the Help link shows how to call the service:

http://wsdot.wa.gov/Traffic/api/TrafficFlow/TrafficFlowREST.svc/GetTrafficFlowsAsJson?AccessCode={ACCESSCODE}

The site also contains a button to request the access code to include in the web service call. If you just paste that URL into your browser, you will get a batch of data similar to the following, refreshed every 20 seconds:

76, 4/29/2015 3:36:22 PM, Olympic, 005es13330, 005, Tacoma Ave, 133.3, 47.231150214, -122.441859377, 4, NB
267, 4/29/2015 3:36:22 PM, Olympic, 016es00965, 016, 36th St., 9.65, 47.291790549, -122.564682487, 1, EB
282, 4/29/2015 3:36:22 PM, Olympic, 016es00418, 016, Jackson DS, 4.18, 47.258535035, -122.527057529, 1, EB

At this point, we have our data source. Now it is just a matter of getting it into Azure and Power BI.

Getting the data into Azure

To get the data from the WSDOT web service, we wrote a simple Azure worker role that pulls data from the WSDOT URL shown above every 20 seconds, parses the data stream, and immediately sends it to an Azure instance of the ConnectTheDots gateway. Sample code, which you will need to modify, is available on the TrafficFlow branch of ConnectTheDots on GitHub.
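As a rough illustration of what the worker role does (the class and method names below are hypothetical, not the actual ConnectTheDots code, which you should take from the TrafficFlow branch), the polling loop boils down to something like this:

// Hypothetical sketch of the worker-role polling loop. SendToEventHubAsync
// stands in for whatever Event Hub client the real code uses.
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class TrafficFlowPoller
{
    private const string WsdotUrl =
        "http://wsdot.wa.gov/Traffic/api/TrafficFlow/TrafficFlowREST.svc/" +
        "GetTrafficFlowsAsJson?AccessCode={0}";

    private readonly HttpClient _http = new HttpClient();
    private readonly string _accessCode;

    public TrafficFlowPoller(string accessCode) { _accessCode = accessCode; }

    public async Task RunAsync()
    {
        while (true)
        {
            // WSDOT refreshes flow data roughly every 20 seconds.
            string json = await _http.GetStringAsync(string.Format(WsdotUrl, _accessCode));

            // Parse, rename fields to the shortened schema shown below,
            // and forward each record to the Event Hub.
            await SendToEventHubAsync(json);

            await Task.Delay(TimeSpan.FromSeconds(20));
        }
    }

    private Task SendToEventHubAsync(string payload)
    {
        // Placeholder: in the real worker role this parses/reshapes the JSON
        // and sends it via an Event Hub client.
        return Task.FromResult(0);
    }
}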
In the TrafficFlow worker role we parsed the data as JSON and shortened the names provided by the WSDOT schema. The first row of the data shown above would be parsed and formatted as follows:

{
  "Value": 1.0,
  "FlowDataID": "76",
  "Region": "Olympic",
  "StationName": "005es13330",
  "LocationDescription": "Tacoma Ave",
  "Direction": "NB",
  "Latitude": "47.231150214",
  "Longitude": "-122.441859377",
  "MilePost": "133.3",
  "RoadName": "005",
  "TimeCreated": "2015-04-29T03:36:22+00:00"
}

And here is an export to Excel of a few of the 5,742,180 records received in 3 weeks (yes, 5.7M records!). Now we just have to get this data into Azure, but it is worth going on one last tangent before showing how to construct the Power BI dashboard. As I mentioned earlier, I am interested in the traffic on the I-520 bridge as I have to commute over it. We can take the data for the last few weeks, filter it for (Road Name = 520 AND Location Description = Midspan), map the Flow Value to a color, and voila, the eastbound traffic pattern emerges. Note one thing, though, that will be relevant when we create the Power BI dashboard: the time reported by the WSDOT web service and forwarded to our Event Hub is UTC, so the above table is really showing traffic around 1pm Seattle time, not 8pm.

Creating the Power BI dashboard

Once our Azure worker role has sent the data to an Azure Event Hub, we can create a Stream Analytics job that queries the Event Hub with an output to Power BI, just like we did in the IoT example from the previous blog. In this case, the query we have created is as follows:

SELECT
    Region, StationName, RoadName, LocationDescription, Milepost,
    Direction, Latitude, Longitude, Value,
    CASE WHEN Value < 2 THEN 'Light'
         WHEN Value < 3 THEN 'Moderate'
         WHEN Value < 4 THEN 'Heavy'
         ELSE 'StopAndGo'
    END AS TrafficCategory,
    TimeCreated,
    DATETIMEFROMPARTS(DATEPART(year, TimeCreated), DATEPART(month, TimeCreated),
        DATEPART(day, TimeCreated), DATEPART(hour, TimeCreated) - 7,
        DATEPART(minute, TimeCreated), DATEPART(second, TimeCreated), 00) AS TimeCreatedPST
FROM TrafficInput TIMESTAMP BY TimeCreated

The only tricky thing in the SQL query is converting the UTC time into PST and adjusting for daylight saving time. It would be more flexible to do this in the worker role in C#, where we could query the difference between UTC and PST and apply that instead of hard-coding 7 hours into the statement above (see the sketch at the end of this post), but we decided it was best to keep everything in UTC in the worker role to make it easier to combine results from multiple time zones. When creating the output to Power BI you need to specify a DATASET NAME, which will appear in the Dataset list on the Microsoft Power BI dashboard. I chose to call the database "WSDOTdata" and the corresponding table "WSDOTtable". For detailed instructions on creating a Power BI dashboard, please refer to the previous blog. The essential steps to create a first report are repeated here:

1. Log in to http://app.powerbi.com, then create a dashboard by clicking "+" in the left menu and save. I created the dashboard "TrafficFlow".
2. Select the Dataset, in my case "WSDOTdata". You will see a blank dashboard with a field picker on the right, reflecting the way we formatted the JSON after the Azure worker role pulled it from WSDOT.
3. Add fields Axis = timecreatedpst and Value = "∑ Max of value".
4. Add Legend = trafficcategory.
5. On the same page, create three slicers: one for roadname, one for locationdescription, and one for direction. Save the report.
6. Using the slicers, select various roads, locations, and directions and pin each of them successively to the dashboard.

For my first chart, I selected I-5 at 40th Ave W, going northbound. The result corresponds to almost equal 'Light' and 'Moderate' traffic, as you might expect for the early rush hour timeframe. My nemesis, I-520, is somewhat worse, showing mostly 'Moderate' traffic. Here is a dashboard with four locations shown.

Now this is beginning to tell me something useful! From my office, if I want to leave now to catch the ferry at Mukilteo, I can either go west on I-520 (Moderate traffic) or I-90 (mostly Stop and Go), then north either on I-405 (also mostly Stop and Go) or on I-5 (mostly Moderate). Decision clear… I-520 to I-5. Mind you, if I wait an hour, I-520 will be jammed too, and I might as well wait till after dinner before heading out.

Takeaways

There are thousands of public data sources you can access, and getting that data into Power BI is really not that different from accessing sensor data. Find a few, and start working with them along the lines shown in this post to see how easy it is to extract and present valuable insights. Even better, start thinking about how you can do more with the insights from the data, for example sending alerts or notifications based upon various criteria, or spawning actions such as dispatching a technician, placing an order, or changing the settings on a thermostat. As I mentioned last time, for further reading, here are some useful links on the Power BI side:

www.powerbi.com
http://dev.powerbi.com
http://blogs.msdn.com/powerbidev
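As a footnote to the time-zone discussion above, here is a hedged sketch (class and method names are illustrative) of what the more flexible C# conversion in the worker role could look like, using TimeZoneInfo so daylight saving transitions are handled automatically instead of hard-coding a 7-hour offset:

// Hypothetical helper: convert the UTC timestamps from WSDOT to Pacific
// time, letting the OS time-zone database handle daylight saving time.
using System;

public static class TimeHelper
{
    private static readonly TimeZoneInfo Pacific =
        TimeZoneInfo.FindSystemTimeZoneById("Pacific Standard Time");

    public static DateTime ToPacific(DateTime utc)
    {
        return TimeZoneInfo.ConvertTimeFromUtc(utc, Pacific);
    }
}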

Posted by on 29 July 2015 | 2:59 pm

Developing for Windows 10 with Visual C++ 2015

Getting Started

Windows 10 introduces the new Universal Windows App platform, which allows a single codebase to be reused across multiple Windows 10 devices. An earlier blog post described the pre-release process of getting set up and some of the new features available in Universal Windows Apps. Now that Windows 10 RTM has been released, the following steps will help you get your machine set up to develop Windows 10 apps using Visual Studio 2015 RTM. Install the official Windows 10 release, ...(read more)

Posted by on 29 July 2015 | 2:41 pm

Coming soon: Windows 10 books

Windows 10 is here and we have books in progress! Learn how to make a smooth transition to Windows 10 and get productive fast. Teach yourself core functionality, new features, and tips for working efficiently in the modern Windows environment. Learn more here. You can pre-order Windows 10 Step by Step now. In the meantime, check out this free ebook: Introducing Windows 10 for IT Professionals, Preview Edition. It will be updated with RTM info soon!

Posted by on 29 July 2015 | 1:46 pm

Debugging .NET Native Windows Universal Apps

With the release of Windows 10 we also shipped Visual Studio Tools for Windows 10. As you will have heard, Universal Windows apps written in .NET (either C# or VB) will be compiled to native machine code before being deployed to customer devices using .NET Native. However, the default Debug configuration still uses the .NET Core runtime to allow for a fast edit->compile->debug cycle. You always need to test with the actual code generation and runtime technology your application will use when running in production, as it can expose bugs you might not be able to find with your development configuration (e.g. race conditions that manifest due to different performance characteristics). To facilitate this, when you choose the Release build configuration your app is compiled using the .NET Native tool chain. If you don't find any issues with the .NET Native build of your application, great! However, if you do run into any issues that require you to debug, there are a few things to note:

When you are debugging an app built with the .NET Native tool chain you will be using the .NET Native debug engine, which has some capability differences from normal .NET debugging
The Release configuration will be using code optimizations that will impact debugging
Variable inspection of runtime types will be slightly different

Code optimizations

If you run into an issue when testing the Release configuration that you need to debug, it is important to note that the Release configuration by default produces fully optimized code (e.g. code inlining will be applied in many places). These optimizations have a significant impact on the debugging experience, including unpredictable stepping and breakpoint behavior (due to code inlining) and the inability to inspect most variables (due to memory optimizations). This means you will want to debug a non-optimized .NET Native build. The fastest way to do this is simply to disable optimizations (covered below); however, we recommend creating a new build configuration so your existing Debug and Release configurations continue to work the way you expect. To do this:

Open the "Configuration Manager" from the build configuration dropdown.
From the "Active solution configuration" dropdown, choose "<New…>".
Give it a name you will understand (e.g. "Debug .NET Native"), choose "Copy settings from Release", and click "OK".
Open the project's property page, and on the Build tab uncheck "Optimize code".

You now have a build configuration you can easily switch to for debugging .NET Native specific issues. Assuming you can reproduce the issue with the non-optimized build, this will yield a much better debugging experience.

Inspecting Runtime Types

One thing that will be different from debugging normal JIT-based .NET applications is that when inspecting objects from the Windows Runtime (including XAML UI elements) you will see a "[Native Implementation]" node. In order to inspect an element's properties you will need to look under this node (these objects are actually implemented in native code and are accessed via a thin .NET wrapper). When looking at this node, if you did not enable Microsoft Symbol Servers in your symbol settings you will see a message saying "information not available no symbols loaded for [module name]". Right-click on the "[Native Implementation]" frame and choose "Load Symbols"; the debugger will then download them from the Microsoft public symbol servers.
Once symbols are loaded you will be able to inspect the property values from the "[Native Implementation]" node. For example, to see the value of the "Text" property on a Textbox object you need to look under the "[Native Implementation]" node; this does mean that if you hover over "textbox1.Text" in the editor while debugging .NET Native you won't see a DataTip.

Summary

You likely won't need to spend much time debugging your code compiled with .NET Native, but when you do, remember to disable code optimizations and that many properties of runtime types will require looking at the "[Native Implementation]" node. As always, we want to hear how the debugging experience is working for you and how we can improve it, so please let me know in the comments section below, through Visual Studio's Send a Smile tool, or via Twitter.

Posted by on 29 July 2015 | 12:55 pm

Announcing Win2D version 1.0.0

466 days and 821 changes after our first commit, and nearly 11 months after we made the project public, today I am pleased to announce the release of Win2D version 1.0.0. This adds support for Visual Studio 2015 RTM, and provides a stable API that we will avoid breaking as we move on to adding vNext features.  It is no longer compatible with earlier Visual Studio 2015 RC builds. Note that the "Win2D" NuGet package is deprecated.  Instead, please use either "Win2D.uwp" or "Win2D.win81" depending on whether you are building for the Universal Windows Platform (UWP) or Windows/Phone 8.1. I’d like to take this opportunity to thank the early adopters who put up with the churn and lack of features in our early builds to give invaluable feedback on how we could make Win2D better.  To reuse a hoary cliché, what bugs remain are our fault, but all the bugs there aren’t is thanks to these brave pioneers!
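For readers new to the library, here is a minimal sketch (not from the announcement) of what immediate-mode drawing with Win2D looks like in a C# UWP app, assuming the Win2D.uwp NuGet package is installed and the page's XAML declares a CanvasControl whose Draw event is wired to OnDraw:

// Illustrative sketch: draw a cleared background, an ellipse, and text
// each time the CanvasControl raises its Draw event.
using Microsoft.Graphics.Canvas.UI.Xaml;
using Windows.UI;

public sealed partial class MainPage
{
    private void OnDraw(CanvasControl sender, CanvasDrawEventArgs args)
    {
        args.DrawingSession.Clear(Colors.CornflowerBlue);
        args.DrawingSession.DrawEllipse(155, 115, 80, 30, Colors.Black, 3);
        args.DrawingSession.DrawText("Hello from Win2D 1.0.0!", 100, 100, Colors.Yellow);
    }
}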

Posted by on 29 July 2015 | 12:53 pm

Building Apps for Windows 10 with Visual Studio 2015

Today is an exciting day for Windows users and developers alike with the launch of Windows 10. For developers, Windows 10 represents the culmination of our platform convergence journey, with Windows running on a single, unified Windows core. This convergence enables one app targeting the Universal Windows Platform to run on every Windows device: on the phone in your pocket, the tablet or laptop in your bag, the PC on your desk, and the Xbox console in your living room. And that's not even mentioning all the new devices being added to the Windows family, including the HoloLens, Surface Hub, and IoT devices like the Raspberry Pi 2. All these Windows devices will access one unified Store for app acquisition, distribution, and updates. Today, I'm excited to announce that along with the RTM of Windows 10, you can now build apps for the Universal Windows Platform with Visual Studio 2015. The Windows Dev Center is also now open and accepting submissions of your Universal Windows apps. Since the first preview of these tools in March, we have made significant updates to many parts of Visual Studio to make it easy to build apps for the Universal Windows Platform. Throughout the course of the past few months, these tools have gotten better with your feedback – thank you!

Acquiring the tools

If you don't already have Visual Studio 2015 RTM, you can install the free Community Edition. If you prefer the Professional or Enterprise edition, you can download them from VisualStudio.com, and during setup choose 'Custom' to install the Tools for Universal Windows Apps. If you already have Visual Studio 2015 RTM, you can now add these tools to your existing Visual Studio installation: run the installer, or open Programs and Features from Control Panel, select Visual Studio and click Change; then in setup, click Modify and select the Tools for Universal Windows Apps.

Creating Projects

You can create a Universal Windows app with the new project templates in Visual Studio 2015 in a language of your choice: C#, VB, C++, or JavaScript. With Windows 10, it is now possible to have a single universal app project that, when deployed, can run on all Windows 10 devices like PC, phone, tablet, or Xbox. However, just as on Windows 8.1, you still have the option to have multiple projects in your solution that you can tailor for the functionality and form factor exhibited by various devices running Windows 10, and you can maximize code sharing across those projects using Shared projects. You can also create Win32 applications that target the Windows 10 SDK to leverage the new APIs exposed by the platform.

.NET Framework libraries delivered as NuGet packages

The entire set of .NET Framework libraries is included in your Universal Windows app as a set of NuGet packages (built on top of NuGet v3). In addition to providing you a rich .NET surface area that works consistently across all Windows 10 devices, this will also allow us to bring newer APIs to you at a faster cadence. We will also be able to consistently evolve these APIs across all mobile devices that you can target, using a set of updated platform targets in Portable Class Libraries (PCLs).

XAML Designer and Editor

The Universal Windows Platform allows you to tailor your app for any Windows 10 device using built-in platform capabilities. The XAML Designer has been enhanced to allow you to create and edit view states that can be triggered automatically when running under different form factors.
Combined with new Windows 10 controls like the RelativePanel and the ability to specify completely tailored views on different devices, you have the tools to delight your users with great user experiences. See this video to get an overview of just how easy it is to get started with designing your first XAML app.

.NET Native improvements

Windows 10 Universal apps built with C# and VB are optimized with .NET Native, which provides up to 60% faster start time and 15-20% less memory usage. We have continued to improve our .NET Native compiler throughput and error diagnostics. We have also enabled cloud compilation of your apps in the Windows Dev Center, allowing us to eventually deliver fixes and improvements to your apps without requiring re-submissions. Learn more about .NET Native here.

Packaging apps for the Store

You can reserve names for your Universal Windows apps with the Store, and create app packages for submission to the Windows Dev Center. Visual Studio also generates packages that are ready for side-loading in an enterprise scenario using the tools in the SDK. The manifest designer in Visual Studio has been updated to allow you to target the full breadth of capabilities that you can express for your app.

Summary

Try out the tools and send us your feedback via the Visual Studio Connect site, Send-a-Smile, or the Windows tools forums. We are looking forward to your submissions of apps targeting Windows 10, demonstrating the entire breadth of capabilities unleashed by this new platform release. Namaste!

Posted by on 29 July 2015 | 12:30 pm

Targeting Windows 10 with your Apache Cordova app

This April, in concert with Windows 10 Technical Preview 2, we debuted support for the Windows 10 platform for Apache Cordova. Now, with the availability of Windows 10, full Windows 10 support is part of the Windows Apache Cordova platform and native to Visual Studio 2015. Apache Cordova allows you to write apps targeting iOS, Android, Windows, and Windows Phone (and other platforms) with a single code base using HTML, JavaScript, and CSS. Support for the Universal Windows Platform now means that from this single code base you can also target the entire family of Windows 10 devices: desktop, phone, tablet, IoT devices, and eventually HoloLens, Xbox, Surface Hub, and more. Your Cordova apps targeting Windows 10 can smoothly transition between usage modes, such as from desktop to tablet when the user removes the keyboard. They can also take advantage of all the new features of Windows 10, such as Cortana integration. Windows 8.1 apps written with HTML, CSS, and JavaScript (with or without Cordova) will continue to work with full fidelity on Windows 10. If you target Windows 10, however, you'll be able to take advantage of changes to the security model for Windows Web apps that allow you to write apps in a way more natural to web programming styles. For instance, new security policy changes mean that you can use the JavaScript libraries you love, such as Angular and Bootstrap, while still having the power of direct access to the Windows Runtime Library (WinRT). You don't need to be running Windows 10 on your development box to build, run, and test your Cordova Windows 10 app. If you're running Windows 8.1, you can use an emulator to run your app. Also, if you're running Windows 7 or Windows 8.1, you can deploy to a remote Windows 10 machine or device. If you prefer to build for Windows 8.1 and Windows Phone 8.1, you can continue to do so using the windows-target-version and windows-phone-target-version properties in your project's config.xml file. Let's get started building an Apache Cordova app targeting Windows 10.

Installation Prerequisites

Windows 10, Windows 8.1, or Windows 7
Visual Studio 2015 (includes tools for Windows 10)

The next sections detail adding the Windows 10 Cordova platform to your project.

Use the Cordova Windows platform

If you have an existing project that you'd like to add Windows 10 support to, skip down to "Add the Windows 10 platform to your Cordova project".

Create a new Cordova project

Open Visual Studio and create a new project using the File > New > Project menu, then select the JavaScript > Apache Cordova Apps > Blank App template to create a new Cordova project.

Add the Windows 10 platform to your Cordova project

First, Visual Studio needs to use Cordova CLI version 5.1.1 or later. Open the Cordova configuration designer by double-clicking on config.xml in the Solution Explorer. Choose Platforms on the left and then set the Cordova CLI textbox to 5.1.1. Save your changes, and then close the configuration designer using the X in the tab strip. This is important because the next step will download a new configuration. Now set Windows-AnyCPU as your build target in the toolbar, and then build the solution using the Build > Build Solution menu item. Your first build will take a few moments because it's using the node package manager (npm) to pull down the updated Cordova platform and add it to your project. You can see its progress in the Output Window.
Now that your project has the correct version of the Windows platform, reopen the configuration designer by double-clicking config.xml. Select the Windows tab on the left, then change the Windows Target Version to Windows 10.0.

Add a plugin to your app

You can also add a plugin to your app through the configuration designer's Plugin tab. This shows a list of core plugins on the Core tab, lets you point to any other plugin (whether local or on Git) on the Custom tab, and shows you which plugins are already part of your project on the Installed tab.

Build and Run the app

To build and run for Windows 10, ensure the build target is set to Windows-AnyCPU. To deploy the app, choose a deployment target. If you're running Windows 10 on your dev box, you can choose Local Machine to deploy locally, Remote Machine to deploy to another Windows 10 device, or an emulator. If you're running Windows 8.1, you can choose between Remote Machine and the mobile emulators. For Windows 7 developers, choose the Remote Machine option. If you are running on Windows 10 and choose to deploy locally, pick the deployment target Local Machine, then choose Run. If you get a message about Developer Mode, follow the instructions to enable Developer Mode on your machine. After a short time, you should see your app running. If you're running on Windows 8.1, you can choose one of the available emulators from the deployment target menu, then run your app. You should see the emulator appear with your app running in it.

Learn More

You can find more information on building for and using the Windows 10 Cordova platform on the Apache Cordova site. If you'd like to learn more about building Universal Windows apps targeting a variety of Windows 10 devices, check out Soma's post for details on building Universal Windows apps in VS 2015, refer to the Windows Dev Center, or watch videos on how to develop using Visual Studio 2015:

Apache Cordova – Build apps for Windows 10
Introduction to Universal Windows Platform
Universal Windows Platform Tailored Experiences
Universal Windows Apps: coding for different devices

Finally, give us feedback on the Cordova Windows 10 platform using the 'Windows10' tag. We look forward to hearing from you.

Polita Paulus, Principal PM Manager, Visual Studio Client Team

Polita Paulus works on TypeScript, Tools for Apache Cordova, and the JavaScript Language Service. Over the years, she's worked on several projects, mostly centered around making web developers' lives a little better. Outside of work, she likes bikes and running.

Posted by on 29 July 2015 | 12:30 pm

Free ebook: Microsoft System Center Building a Virtualized Network Solution, Second Edition

We’re happy to announce the release of our newest free ebook, Microsoft System Center Building a Virtualized Network Solution, Second Edition (ISBN 9780735695801), by Nigel Cain, Michel Luescher, Damian Flynn, and Alvin Morales; Series Editor: Mitch Tulloch. Download all formats (PDF, Mobi and ePub) at the Microsoft Virtual Academy. Below you’ll find a few helpful sections from the Introduction. Enjoy! Introduction According to the Hyper-V Network Virtualization overview at http://technet.microsoft.com/en-us/library/jj134230.aspx, Network Virtualization “provides virtual networks to virtual machines similar to how server virtualization provides virtual machines to the operating system. Network Virtualization decouples virtual networks from the physical network infrastructure and removes the constraints and limitations of VLANs and hierarchical IP address assignment from virtual machine provisioning. This flexibility makes it easy for customers to move to Infrastructure as a Service (IaaS) clouds and efficient for hosters and datacenter administrators to manage their infrastructure while maintaining the necessary multi-tenant isolation, security requirements, and supporting overlapping Virtual Machine IP addresses.” Although the benefits of this approach are very clear, designing and implementing a solution that delivers the promised benefits is both complex and challenging; architects, consultants, and fabric administrators alike often struggle to understand the different features and concepts that make up a solution. Who should read this book? Much of the current published material covering Network Virtualization is focused on the how, the set of tasks and things that you need to do (either in the console or through Windows PowerShell) to set up and configure the environment. In this book, we take a different approach and instead consider the what, with a view to helping private and hybrid cloud architects understand the overall architecture, the role each individual feature plays, and the key decision points, design considerations, and best practice recommendations they should adopt as they begin to design and build out a virtualized network solution using Windows Server and Microsoft System Center Virtual Machine Manager. In summary, this book is specifically designed for architects and cloud fabric administrators who want to understand what decisions they need to make during the design process and the implications of those decisions, what constitutes best practice, and, ultimately, what they need to do to build out a virtualized network solution that meets today's business requirements while also providing a platform for future growth and expansion. New to this second edition are chapters covering the Hyper-V Network Virtualization gateway, designing a solution that extends an on-premises virtualized network solution to an external (hosted) environment, details of how to troubleshoot and diagnose some of the key connectivity challenges, and a look at the Cloud Platform System (CPS) and some of the key considerations that went into designing and building the network architecture and solution for that environment. 
In writing this book, we assume that, as architects and fabric administrators interested in Microsoft Network Virtualization, you are familiar with and have a good understanding of the networking features and capabilities of Windows Server, Hyper-V, and Virtual Machine Manager, as well as the Microsoft Cloud OS vision available at http://www.microsoft.com/en-us/server-cloud/cloud-os/default.aspx.

What topics are included in this book?

The vast majority of the book is focused on architecture and design, highlighting key design decisions and providing best practice advice and guidance relating to each major feature of the solution.

Chapter 1: Key concepts. A virtualized network solution built on Windows Server and System Center depends on a number of different features. This chapter outlines the role each of these features plays in the overall solution and how they are interconnected.

Chapter 2: Logical networks. This chapter provides an overview of the key considerations, outlines some best practice guidance, and describes a process for identifying the set of logical networks that are needed in your environment.

Chapter 3: Hyper-V port profiles. This chapter discusses the different types of port profiles that are used in Virtual Machine Manager, outlines why you need them and what they are used for, and provides detailed guidance on how and when to create them.

Chapter 4: Logical switches. This chapter describes the function and purpose of logical switches, which are essentially templates that allow you to consistently apply the same settings and configuration across multiple hosts.

Chapter 5: Network Virtualization gateway. This chapter outlines key design choices and considerations for providing cross-premises connectivity from networks at tenant sites to virtual networks dedicated (per tenant) in a service provider network.

Chapter 6: Deployment. This chapter builds on the material discussed in previous chapters and walks through common deployment scenarios, highlighting known issues (and workarounds) relating to the deployment and use of logical switches in your environment.

Chapter 7: Operations. Even after you have carefully planned a virtual network solution, things outside of your immediate control might force changes to your virtualized network solution. This chapter walks you through some relatively common scenarios and provides recommendations, advice, and guidance on how best to deal with them.

Chapter 8: Diagnosing connectivity issues. This chapter looks at how to approach a connectivity problem with a virtualized network solution, the process you should follow to troubleshoot the problem, and some actions you can take to remediate the issue and restore service.

Chapter 9: Cloud Platform System network architecture. This chapter reviews the design and key decision points for the network architecture and virtualized network solution within the Microsoft Cloud Platform System.

To recap, this book is mainly focused on architecture and design (what is needed to design a virtualized network solution) rather than on the actual steps required to deploy it in your environment. Other than in a few chapters, you will find few examples of code. This is by design. Our focus here is not to provide details of how you achieve a specific goal but rather on what you need to do to build out a solution that meets the needs of your business and provides a platform for the future.
When you have designed a solution using the guidelines documented in this book, you will be able to make effective use of some of the excellent materials and examples available on the Building Clouds blog (http://blogs.technet.com/b/privatecloud/) to assist you with both solution deployment and ongoing management.

Acknowledgments

The authors would like to thank, once again, our original reviewers Stanislav Zhelyazkov (MVP), Hans Vredevoort (MVP), and Phillip Moss (NTTX), as well as Greg Cusanza, Thomas Roettinger, Artem Pronichkin, and Cristian Edwards Sabathe from Microsoft, for providing valuable feedback and suggestions on the content of the book. We would also like to thank and show our appreciation to Nader Benmessaoud, Robert Davidson, Ricardo Machado, Kath McBride, and Larry Zhang (all from Microsoft) for their review, feedback, and comments specific to this second edition. Without their contributions, this book would not be as thorough or as complete as you find it, so our thanks once again for their time and efforts in making this happen.

Posted by on 29 July 2015 | 12:30 pm