It's all about being Open!

Why Open? One of the coolest things to happen at Microsoft is embracing and supporting openness. This is about how we, as a company, collaborate with others in the industry and how we listen to our customers. It's about the choice that we give to our customers and developers. From supporting Linux, Drupal, Java, Hadoop, PHP, NodeJS, HTML5 and Python to extending our support on the cloud through Microsoft Azure, Microsoft has embraced openness. In fact, Microsoft has partnered with 150+ standards bodies and 400+ working groups around the world to ensure that our technology works with everyone else's.

Cloud and Open Technologies

Microsoft Azure is an open, flexible and scalable platform that is a great choice for app creation. Azure supports virtual machines on several Linux flavors such as CentOS, Ubuntu and SUSE. Not only does it support open platforms, but also open development tools. As mentioned above, the support for the various development tools is pretty exhaustive. For example, look into what we have in store for Azure and PHP at the PHP Development Centre, a rich resource for tutorials and documentation that will enable you to get started with development on the cloud. Also look into PHP Tools for Visual Studio, which provides a well-known editor for PHP, HTML/JavaScript/CSS support and, most importantly, integration with Azure itself.

How can you get involved with this now?

TrueNorthPHP is one of the biggest conferences hosting the PHP community in Toronto, taking place on Nov 6th, 7th & 8th at the Microsoft campus in Mississauga. The conference showcases world-class speakers covering topics such as clean application development, security and a whole gamut of relevant and interesting subjects. One of the most important components of the conference is a hackathon, the Azure API Challenge, taking place on Day 2, Nov 7th.
Again, the details of the event are as follows:

Date: Nov 6th, 7th, 8th
Venue: Microsoft Canada, 1950 Meadowvale Blvd., Mississauga, ON L5N 8L9

Come hack with us, attend some of the best talks and let us celebrate the movement to openness together. You can connect with Mickey (@ScruffyFurn) or me, Adarsha (@AdarshaDatta) anytime for further details.

Posted by on 31 October 2014 | 11:30 am

Pumpkinduino Part 4: The final hardware and software put together

Previous post: Pumpkinduino Part 3. OK, I procrastinated, got distracted, then got sick with one of those icky colds you just can't shake, so the final product isn't really all that polished. On the other hand, you can pull the parts out of your Arduino pile, since I didn't use the Galileo in the final product. Software: well, nothing like procrastination to get you into trouble, but I did manage to find some ideas, like "Pimp your Pumpkin" by Matt Makes. I didn't...(read more)

Posted by on 31 October 2014 | 11:00 am

Using Bing for technical instant answers and automated solutions

Bing has been providing factual instant answers for some time now, but recently they have added "technical" instant answers for questions about Microsoft products or technical support issues. My previous team worked on the content management system that our internal content delivery teams are now using to add technical instant answers to Bing. Here's an example technical instant answer for the "Cortana" search term:  Now that I'm working on support diagnostics and automated solutions again, I have been working with the Bing and content delivery teams to get some instant answers created with links to some of our automated solutions. I'm happy to announce that the first one is now live! So you can search Bing for "Windows Update Troubleshooter" (or a variety of related terms and error messages) and the first result will be a technical instant answer with a link to download and run our automated troubleshooter to fix problems with Windows Update. When you click the link in step 3, you will be prompted to open (or run) or save the troubleshooter. Just click Open (or Run) to launch the troubleshooter. The content delivery teams will be constantly adding more technical instant answers, and we hope to have more live with automated solutions soon!

Posted by on 31 October 2014 | 10:17 am

Workaround: "An unexpected client error has occurred"

If you receive the error below while using LCS, clear the browser cache and try again.

Posted by on 31 October 2014 | 10:03 am

Sample chapter: The Liskov Substitution Principle

The Liskov substitution principle (LSP) is a collection of guidelines for creating inheritance hierarchies in which a client can reliably use any class or subclass without compromising the expected behavior. This chapter from Adaptive Code via C#: Agile coding with design patterns and SOLID principles explains what the LSP is and how to avoid breaking its rules. After completing this chapter, you will be able to:

- Understand the importance of the Liskov substitution principle.
- Avoid breaking the rules of the Liskov substitution principle.
- Further solidify your single responsibility principle and open/closed principle habits.
- Create derived classes that honor the contracts of their base classes.
- Use code contracts to implement preconditions, postconditions, and data invariants.
- Write correct exception-throwing code.
- Understand covariance, contravariance, and invariance and where each applies.

Find the complete chapter here:

Posted by on 31 October 2014 | 10:00 am

I'm back!

Did anybody miss me? :) After a long hiatus from this blog I'm planning to start posting here again. For the past few years I have been working on Microsoft internal content and knowledge management systems, including a KCS verified knowledge management system used to manage the Knowledge Base. Now I'm working on support and self-help diagnostics and automated solutions again. I'm excited to be back in this space, and I'm looking forward to updating you on some of the new customer-facing stuff we're working on. So watch this space for more information (coming soon)...

Posted by on 31 October 2014 | 9:22 am

The case of the file that won't copy because of an Invalid Handle error message

A customer reported that they had a file that was "haunted" on their machine: Explorer was unable to copy the file. If you did a copy/paste, the copy dialog displayed an error:

    1 Interrupted Action
    Invalid file handle
    ⚿ Contract Proposal
    Size: 110 KB
    Date modified: 10/31/2013 7:00 AM

Okay, time to roll up your sleeves and get to work. This investigation took several hours, but you'll be able to read it in ten minutes because I'm deleting all the dead ends and red herrings, and because I'm skipping over a lot of horrible grunt work, like tracing a variable in memory backward in time to see where it came from.¹ The Invalid file handle error was most likely coming from the error code ERROR_INVALID_HANDLE. Some tracing of handle operations showed that a call to GetFileInformationByHandle was being passed INVALID_HANDLE_VALUE as the file handle, and as you might expect, that results in the invalid handle error code. Okay, but why was Explorer's file copying code getting confused and trying to get information from an invalid handle? Code inspection showed that the handle in question is normally set to a valid handle during the file copying operation. So the new question is, "Why wasn't this variable set to a valid handle?"

Debugging why something didn't happen is harder than debugging why it did happen, because you can't set a breakpoint of the form "Break when X doesn't happen." Instead you have to set a breakpoint in the code that you're pretty sure is being executed, then trace forward to see where execution strays from the intended path. The heavy lifting of the file copy is done by the CopyFile2 function. Explorer uses the CopyFile2ProgressRoutine callback to get information about the copy operation. In particular, it gets a handle to the destination file by making a duplicate of the hDestinationFile in the COPYFILE2_MESSAGE structure. The question is now, "Why wasn't Explorer told about the destination file that was the destination of the file copy?"
Tracing through the file copy operation showed that the file copy operation actually failed because the destination file already exists. The failure would normally be reported as ERROR_FILE_EXISTS, and the offending GetFileInformationByHandle would never have taken place. Somehow the file copy was being treated as having succeeded even though it failed. That's why we're using an invalid handle. The CopyFile2 function goes roughly like this:

HRESULT CopyFile2()
{
  BOOL fSuccess = FALSE;
  HANDLE hSource = OpenTheSourceFile(); // calls SetLastError() on failure
  if (hSource != INVALID_HANDLE_VALUE) {
    HANDLE hDest = CreateTheDestinationFile(); // calls SetLastError() on failure
    if (hDest != INVALID_HANDLE_VALUE) {
      if (CopyTheStuff(hSource, hDest)) // calls SetLastError() on failure
      {
        fSuccess = TRUE;
      }
      CloseHandle(hDest);
    }
    CloseHandle(hSource);
  }
  return fSuccess ? S_OK : HRESULT_FROM_WIN32(GetLastError());
}

Note: This is not the actual code, so don't go whining about the coding style or the inefficiencies. But it gets the point across for the purpose of this story. The CreateTheDestinationFile function failed because the file already existed, and it called SetLastError to set the error code to ERROR_FILE_EXISTS, expecting the error code to be picked up when it returned to the CopyFile2 function. On the way out, the CopyFile2 function makes two calls to CloseHandle. CloseHandle on a valid handle is not supposed to modify the thread error state, but somehow stepping over the CloseHandle call showed that the error code set by CreateTheDestinationFile was being reset back to ERROR_SUCCESS. (Mind you, this was a poor design on the part of the CopyFile2 function, to leave the error code lying around for an extended period, since the error code is highly volatile, and you would be best served to get it while it's still there.) Closer inspection showed that the CloseHandle function had been hooked by some random DLL that had been injected into Explorer.
The hook function was somewhat complicated (more time spent trying to reverse-engineer the hook function), but in simplified form, it went something like this:

BOOL Hook_CloseHandle(HANDLE h)
{
  HookState *state = (HookState*)TlsGetValue(g_tlsHookState);
  if (!state || !state->someCrazyFlag) {
    return Original_CloseHandle(h);
  }
  ... crazy code that runs if the flag is set ...
}

Whatever that crazy flag was for, it wasn't set on the current thread, so the intent of the hook was to have no effect in that case. But it did have an effect. The TlsGetValue function modifies the thread error state, even on success. Specifically, if it successfully retrieves the thread local storage, it sets the thread error state to ERROR_SUCCESS. Okay, now you can put the pieces together. The file copy failed because the destination already exists. The CreateTheDestinationFile function called SetLastError(ERROR_FILE_EXISTS). The file copy function did some cleaning up before retrieving the error code. The cleanup functions are not expected to alter the thread error state. But the cleanup function had been patched by a rogue DLL, and the hook function did alter the thread error state. This alteration caused the file copy function to think that the file was successfully copied even though it wasn't. In particular, the caller of the file copy function expects to have received a handle to the copy during one of the copy callbacks, but the callback never occurred because the file was never copied. The variable that holds the handle therefore remains uninitialized. This generates an invalid handle error when the code tries to use that handle. This error is shown to the user. An injected DLL that patched a system call resulted in Explorer looking like an idiot. (As Alex and Gaurav well know, Explorer is perfectly capable of looking like an idiot without any help.) We were quite fortunate that the error manifested itself as a failure to copy the file.
Imagine if Explorer didn't use GetFileInformationByHandle to get information about the file that was copied. The CopyFile2 function returns S_OK even though it actually failed and no file was copied. Explorer would have happily reported, "Congratulations, your file was copied successfully!" Stop and think about that for a second. A rogue DLL injected into Explorer patches a system call incorrectly and ends up causing all calls to CopyFile2 to report success even if they failed. The user then deletes the original, thinking that the file was safely at the destination, then later discovers that, oops, looks like the file was not copied after all. Sorry, it looks like that rogue DLL (which I'm sure had the best of intentions) had a subtle bug that caused you to lose all your data. This is why, as a general rule, Windows considers DLL injection and API hooking to be unsupported. If you hook an API, you not only have to emulate all the documented behavior, you also have to emulate all the undocumented behavior that applications unwittingly rely on. (Yes, we contacted the vendor of the rogue DLL. Ideally, they would get rid of their crazy DLL injection and API hooking because, y'know, unsupported. But my guess is that they are going to stick with it. At least we can try to get them to fix their bug.) ¹ To do this, you identify the variable and set a breakpoint when that variable is allocated. (This can be tricky if the variable belongs to a class with hundreds of instances; you have to set the breakpoint on the correct instance!) When that breakpoint is hit, you set a write breakpoint on the variable, then resume execution. Then you hope that the breakpoint gets hit. When it does, you can see who set the value. "Oh, the value was copied from that other variable." Now you repeat the exercise with that other variable, and so on. This is very time-consuming but largely uninteresting so I've skipped over it.

Posted by on 31 October 2014 | 9:00 am

Project Online Reporting and Power BI

After this week's experiences at TechEd Europe and the many discussions at our Project booth, here is a quick heads-up. Anyone looking into reporting with Project Online should definitely take a closer look at the features of Power BI. For analyzing data, especially with the complexity of a project portfolio, the included functions are very helpful. Two points are particularly interesting for users of Excel Services/Excel Online for portfolio reporting. With Power BI you can set up an automated refresh of the data sources inside your reports, so that users don't have to wait for current data when they open a report; you can choose in detail which data connections need a regular refresh. Even more exciting is the future of data analysis: querying the available data in natural language instead of with potentially complex query syntax. Power BI offers a solution for that as well. To try it out, the feature can be added in the O365 admin portal. There is currently a 250 MB size limit for Excel documents, but for the data volumes seen so far this restriction should not matter in practice. So, read on here:

Posted by on 31 October 2014 | 8:59 am

Cloud services for students

Since I am often at universities and work with both students and lecturers, I frequently get asked what options there are for using the Microsoft cloud platform. Below I would like to briefly cover a few usage scenarios and then describe the offers for students. Microsoft Azure – your personal data center: both in your studies and in personal projects, you often need a server or storage to make software and data available on the internet. No matter...(read more)

Posted by on 31 October 2014 | 8:32 am

Top 10 Microsoft Developer Links for Friday, October 31, 2014

- Visual Studio Toolbox: Load Testing Made Easier
- Day 1 at TechEd: Recap of TechEd Europe, day one
- Day 2 at TechEd: Windows for IoT, Azure Stream Analytics and the Future of .NET
- Pranav Rastogi: ASP.NET Identity 2.2.0-alpha1
- Matt Harrington: JavaScript unit testing: using the Chutzpah test runner in Visual Studio
- Adarsha Datta: Part 6: Get started with Python: Build your first Django Application in PTVS
- Marius Schulz: Using the IndentedTextWriter Class to Output Hierarchically Structured Data
- Sergio De Simone: Lock-free Programming in C++ with Herb Sutter
- Esteban Garcia: Bulk editing MTM 2013 test cases
- dotNetDave: Using Generic Constraints & Default

Posted by on 31 October 2014 | 8:00 am

Surfing the Tube

As the world's first underground railway, the London Underground – affectionately known as the Tube – serves 270 stations and 249 miles of track, with some 2 billion rides taken annually. It has long been hailed as a model for urban transport as well as a cultural symbol of design and innovation, its slogans and station names splashed across coffee mugs and t-shirts worldwide. Of course, with so many riders the Tube has historically suffered from overcrowding at peak hours, forcing temporary closures of stations to accommodate passengers. Enter a real-time monitoring system launched in April 2014, combined with predictive analytics housed in the cloud. Before its launch there was less than a fifty per cent chance that an incident could be located, diagnosed and repaired on the first try; predictive analytics enables the team to ensure that the most frequently needed parts are on hand before something breaks, instead of having to shut down a portion of a station to wait for components. Parts needed in pairs are being stocked differently, too; all of this adds up to more Londoners on the go, shuttling back and forth between work and play, all built on the invisible cushion of cloud and insights. Transport for London will be one of more than a dozen innovative organisations presenting at Future Decoded on how they're using technology to meet the challenges of tomorrow. Discover the full line-up of speakers and register to attend this unique, free event on 10th November at ExCeL London.

Posted by on 31 October 2014 | 7:58 am

Lync for Mac 2011 14.0.10 has been released

Good evening, this is Watson from Lync Support. A new version of Lync for Mac 2011 was released the other day, adding features many of you have been waiting for. New features: 1. If you are disconnected from a meeting because of an unstable network connection, you are now reconnected automatically (Media Resiliency). 2. OS X Yosemite is now supported. 3. Conversation history can now be saved to Exchange. In previous versions of Lync for Mac 2011, conversation history was saved only locally and could not be viewed when signing in from another device. As shown in the image below, this release adds a conversation history tab, and conversation history is saved to Exchange via EWS. For how to obtain the update and the list of issues fixed in this release, see below. More features are planned, so stay tuned!

Posted by on 31 October 2014 | 7:19 am

Code Recipe monitor campaign begins

#wpdev_jp #win8dev_jp A project that lets you experience Visual Studio 2013 through code samples starts at the end of November, but ahead of that we are running a preview monitor campaign. If you would like to take part, visit the site above, try the recommended recipes introduced there, and enter. Fifty participants will be chosen by lottery to receive a prize; the entry link is shown when you complete a recipe. Sure, you could probably figure it out with a little digging, but don't be a spoilsport, have fun with it. There are two kinds of recipes, a web app version and a Windows app version; pick whichever you prefer. We look forward to your entries. Seriously, the Windows version covers genuinely practical techniques, so even if it takes a little effort, please give it a try.

Posted by on 31 October 2014 | 7:00 am

Kinect With Me Part 2 - Tracking Bodies and Overlaying Shapes on Infrared Data

This Kinect for Windows v2 tutorial series will help you build an interactive Kinect app using Visual Studio and C#. You are expected to have basic experience with C# and know your way around Visual Studio. Throughout the series, you will learn how to set up your Kinect for Windows v2 sensor and dev environment, how to track skeletons and hand positions, how to manipulate the data from the colour and infrared sensors and the microphone array, how to recognize hand gestures, and how to put it all together in a deployable Windows app. Level: Beginner to Intermediate. If you have not yet done so, I recommend you begin with Part 1: Setting Up Your Machine. Another great resource is Getting Rolling with Kinect for Windows v2 SDK, a post by Microsoft Student Partner Louis St-Amour.

Welcome to part 2 of the Kinect With Me Kinect for Windows development series! Last time we set up our dev environments for Kinect for Windows development. Today we'll start getting into the code with an intro to body tracking and overlaying shapes on the body in real time. By the end of this tutorial, you will know how to recognize bodies and overlay images that move with the skeleton. To start, we'll keep it easy and use the infrared sensor so we don't need to deal with colour. By the end of this tutorial we will have an app that displays the sensor's IR feed with a red circle on the head of the bodies in view.

1) Add a Canvas for the Infrared Feed

Within the main Grid on MainPage.xaml, add an Image and a Canvas. For now we'll use 512x424 dimensions because that is the resolution of the IR sensor.

<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Image Name="image" Width="512" Height="424"/>
    <Canvas Name="bodyCanvas" Width="512" Height="424"/>
</Grid>

2) Initialize the Sensor and Related Body Data Structures

Click on over to the MainPage.xaml.cs file, where we'll give our app some eyes.
The very first thing you want to do is add WindowsPreview.Kinect:

using WindowsPreview.Kinect;

Then we'll set up the objects we need:

KinectSensor sensor;
InfraredFrameReader irReader;
ushort[] irData;
byte[] irDataConverted;
WriteableBitmap irBitmap;
Body[] bodies;
MultiSourceFrameReader msfr;

And initialize them on MainPage_Loaded:

void MainPage_Loaded(object sender, RoutedEventArgs e)
{
    sensor = KinectSensor.GetDefault();
    irReader = sensor.InfraredFrameSource.OpenReader();

    FrameDescription fd = sensor.InfraredFrameSource.FrameDescription;
    irData = new ushort[fd.LengthInPixels];
    irDataConverted = new byte[fd.LengthInPixels * 4];
    irBitmap = new WriteableBitmap(fd.Width, fd.Height);
    image.Source = irBitmap;

    bodies = new Body[6];
    msfr = sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Body | FrameSourceTypes.Infrared);
    msfr.MultiSourceFrameArrived += msfr_MultiSourceFrameArrived;

    sensor.Open();
}

Essentially what we're doing here is setting up the infrared reader and setting the image (that we made in our xaml file) to the infrared data. We set it up in a WriteableBitmap so it is properly formatted, and set image.Source equal to that WriteableBitmap. Then we initialize the bodies array, which will store the Body objects identified by the sensor. You'll notice that we give the Body array a size of 6 because the Kinect v2 can track up to six bodies simultaneously. Keep in mind that, while six bodies can be tracked, only two of those bodies can have recognized hand states at any given time (but we'll deal more with that later). Finally, we open a MultiSourceFrameReader on the sensor and create a method, msfr_MultiSourceFrameArrived, to handle each frame that arrives in the reader.

3) Handle Frames and Display Them in the App

Now, let's write the FrameArrived method for the MultiSourceFrameReader msfr.
This will take the frames acquired by the reader and, for each infrared frame irFrame, convert the raw data into usable greyscale shades and copy it into irBitmap to be displayed in the image element. Each infrared frame arrives as an array of 16-bit intensity values, one per pixel; keeping the high byte of each sample gives us an 8-bit grey level. In the following code we set up our frames and display the Kinect's infrared view in the image part of our app.

void msfr_MultiSourceFrameArrived(MultiSourceFrameReader sender, MultiSourceFrameArrivedEventArgs args)
{
    using (MultiSourceFrame msf = args.FrameReference.AcquireFrame())
    {
        if (msf != null)
        {
            using (BodyFrame bodyFrame = msf.BodyFrameReference.AcquireFrame())
            {
                using (InfraredFrame irFrame = msf.InfraredFrameReference.AcquireFrame())
                {
                    if (bodyFrame != null && irFrame != null)
                    {
                        irFrame.CopyFrameDataToArray(irData);

                        for (int i = 0; i < irData.Length; i++)
                        {
                            byte intensity = (byte)(irData[i] >> 8);
                            irDataConverted[i * 4] = intensity;
                            irDataConverted[i * 4 + 1] = intensity;
                            irDataConverted[i * 4 + 2] = intensity;
                            irDataConverted[i * 4 + 3] = 255;
                        }

                        irDataConverted.CopyTo(irBitmap.PixelBuffer);
                        irBitmap.Invalidate();
                    }
                }
            }
        }
    }
}

Now when you run your app you should see something like this (but with you in the frame instead of me!): Great!
Now the Kinect sees you…but does it recognize you as a body? Let's find out!

4) Overlay Shapes On Tracked Bodies

In the deepest if statement, after the line irBitmap.Invalidate();, we'll add code to display a red circle over the heads of the bodies in view. Pay close attention to how we position the circle, 25px in from the left and top of the head, so it is centered on the head. If you want, play around with this step a bit and share your screenshots on Twitter with the hashtag #kinectwithme (tweet it at @theTrendyTechie and @cdndevs)! The most creative one gets a shoutout in Kinect With Me Part 3. First we add:

using Windows.UI.Xaml.Shapes;
using Windows.UI;

Then we can write the fun stuff:

bodyFrame.GetAndRefreshBodyData(bodies);
bodyCanvas.Children.Clear();
foreach (Body body in bodies)
{
    if (body.IsTracked)
    {
        Joint headJoint = body.Joints[JointType.Head];
        if (headJoint.TrackingState == TrackingState.Tracked)
        {
            DepthSpacePoint dsp = sensor.CoordinateMapper.MapCameraPointToDepthSpace(headJoint.Position);
            Ellipse headCircle = new Ellipse() { Width = 50, Height = 50, Fill = new SolidColorBrush(Color.FromArgb(255, 255, 0, 0)) };
            bodyCanvas.Children.Add(headCircle);
            Canvas.SetLeft(headCircle, dsp.X - 25);
            Canvas.SetTop(headCircle, dsp.Y - 25);
        }
    }
}

Now when you launch your app, you should see that your face is covered by a red circle (move back if you don't see it at first, you may be too close to the sensor). If it is, congratulations! You've successfully learned how to track bodies using Kinect, and how to overlay shapes to convey this to the user. Conveying a successful connection is very important for the user experience of Kinect apps. It is not enough for the app to work; the user must know that it works. Before we added the circle we had no idea whether our bodies were being tracked or recognized at all.
Adding the circle conveyed to us when things were working as expected. Summary: Today we learned how to start tracking bodies using the infrared sensor in our app, and how to overlay shapes (or images, or text, for that matter!) on a specific part of the body, that will move with the body as it is tracked. This tutorial was partially inspired by the MVA course by Ben Lower and Rob Relyea, Programming Kinect for Windows v2 Jump Start. They provide a great introduction, and much of the code you see here is very similar to the code in Part 2 of their course. Starting with next week's tutorial we will deviate from their course and start introducing more concepts. Thanks for Kinecting with me! See you next Thursday for Part 3 of this tutorial series. Don't forget to tweet at @theTrendyTechie and @cdndevs with the hashtag #kinectwithme to share your screenshots and questions!     *************** Sage Franch is a Technical Evangelist at Microsoft and blogger at Trendy Techie.    

Posted by on 31 October 2014 | 6:30 am

#retosMSDN: Challenge 5 – Extending functionality in C#

Here is the fifth of our #retosMSDN challenges! Although it is really not one challenge but four mini-challenges, independent of each other. We also have two tickets for Codemotion 2014 in need of an owner, which will go to the first two developers in Spain to send us a correct solution to all four mini-challenges.

The Challenge

Before you start: in this Visual Studio 2013 project, which you can download from GitHub, you will find the unit tests with which you can verify that your implementation is correct, referenced in each of the points below. We need you to implement the following: 1) Duration and its From method, so that it passes the unit tests in UnitTestDuration.cs. 2) The NotNull method, which checks whether an object of any class is null or not, and passes the unit tests in UnitTestNotNull.cs. 3) A DictionaryPlus dictionary that we can index with the set of keys whose values we want, returning an enumeration of those values, and that passes the unit tests in UnitTestDictionaryPlus.cs. 4) The ToUpperNoCopy method, which converts all the characters of any string to upper case, and passes the unit tests in UnitTestToUpperNoCopy.cs.

Remember, we are not doing TDD here. The provided tests are an aid; besides passing them you will also have to meet any additional requirements specified in this article, which will be verified manually.

The Solution

Have you already solved one of the mini-challenges? Share your Visual Studio solution with us. There is no need to wait until you have solved them all! Next Friday, November 7th, we will publish our proposed solution to the four mini-challenges, along with the winners of the Codemotion tickets.
Did you know that… …if you have Visual Studio installed, you can find the C# specifications as a Word document in the VC#\Specifications folder inside the Visual Studio folder in Program Files (or Program Files (x86) on 64-bit machines)? For example, I have the C# 5.0 document here: C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC#\Specifications\1033\CSharp Language Specification.docx

If you have any questions or problems while solving the challenge, or if you would like to propose your own challenge to the rest of the community, do not hesitate to get in touch with us. Regards, Alejandro Campos Magencio (@alejacma), Technical Evangelist. PS: Stay up to date with all of Microsoft's news for Spanish developers through the MSDN Twitter, the MSDN Facebook, the MSDN Blog and the MSDN Flash newsletter.

Posted by on 31 October 2014 | 5:38 am

Fourth cumulative update for System Center 2012 R2 Operations Manager

UR4, or Update Rollup 4, for System Center 2012 R2 Operations Manager is available from Microsoft Knowledge Base article kb2992020. More broadly, a good portion of the System Center 2012 R2 components also receive UR4s, as described starting from article kb2992012.

Posted by on 31 October 2014 | 5:04 am


Hello everyone, this is Watanabe, evangelist at Microsoft Japan. (Photo: the students who took part in the final presentations.) Under the partnership concluded in 2013 between the City of Yokohama, the Yokohama City Board of Education and Microsoft Japan, we again held an app development course this year for students of Yokohama municipal high schools. The course started in June and ran for five months through October, as a series of ten sessions. Last year it was aimed at students of Yokohama Science Frontier High School, but this year applications were opened to all Yokohama municipal high schools, and students from Higashi, Sakuragaoka, Totsuka and other high schools took part as well. The participants' programming skills varied widely, and some were complete beginners this time, so, building on last year's lectures, we used as our textbook "Learn by Building Games: HTML5 + CSS + JavaScript Programming," written by Kenichiro Tanaka of Microsoft Development, who again supported the course this year. The sessions were held at Yokohama Science Frontier High School, which has a well-equipped ICT environment (a development environment where programming classes can be run freely). The biggest issue in running programming courses and workshops for students is the ICT environment: each student needs a development machine of their own, with the latest OS and the latest development tools always installed; and since today's students look up anything unclear on the web as they go, a comfortable internet (network) environment is also essential. Yokohama's high schools have built out their ICT environment by actively using Microsoft's academic licensing and DreamSpark, which lets them set up development environments free of charge. In the first half of the ten sessions we lectured on the basics of HTML, CSS, JavaScript, Canvas and Store app development; in the second half the students split into five teams, and each team freely developed the app (a utility or a game) it wanted to build. With the freely usable ICT environment described above in place, once the basic concepts of programming are conveyed, today's students research whatever they don't understand on the web themselves and rapidly raise their level. In the end, all five teams turned their ideas into working apps. On October 29 they visited Microsoft Japan's Shinagawa office for the final presentations. The apps presented were three games (a 3D RPG including a pseudo-3D maze, a highly polished 2D action RPG, and a tower defense game) and two utilities (a Twitter client and a train service information app). (Photo: a student presenting their app.) NHK covered the day, and it aired on the Shutoken Network news; the video is here. => Shunichi Kajisa, president of Microsoft Development Co., Ltd. and Chief Technology Officer of Microsoft Japan, joined the presentations and spoke to the participating students about the importance of programming skills: "Don't just buy and play games — actually build one. Don't just download and use apps — make your own." With major movements around the world, such as those in the United States and the introduction of programming into compulsory education in the United Kingdom, many countries are working to produce people equipped with programming skills, and Microsoft supports and cooperates with these movements worldwide. Japan, still debating the pros and cons of bringing ICT into school education, is frankly a lap behind; to avoid being left behind by the global current, we hope to continue supporting student developers like these.

Posted by on 31 October 2014 | 4:36 am

[Sample Of Oct. 31] How to delete entity from Windows Azure Table storage in a DropDownList Control

Oct. 31 Sample: Deleting or retrieving an Azure Table entity requires both the partition key and the row key. In ASP.NET, unlike WPF, you may not keep the entire entity in memory, so after you bind the table entities to a data-binding control, you can lose the partition key and row key when the page posts back. This code snippet shows how to bind a table entity's data to a dropdown...(read more)
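The usual trick in such a snippet is to serialize both keys into the list item's value so they survive the postback, then split them apart before issuing the delete. A minimal, language-agnostic sketch of that round trip, shown here in Python; the pipe separator and function names are illustrative, not taken from the sample:

```python
# Sketch: preserve an Azure Table entity's composite key across a postback
# by packing PartitionKey and RowKey into the dropdown item's value.
# Assumption: the "|" separator does not occur in either key.

def pack_key(partition_key: str, row_key: str) -> str:
    """Encode both keys into a single value for a dropdown item."""
    return f"{partition_key}|{row_key}"

def unpack_key(value: str) -> tuple[str, str]:
    """Recover the (partition_key, row_key) pair on postback."""
    partition_key, row_key = value.split("|", 1)
    return partition_key, row_key

value = pack_key("customers", "cust-001")
assert unpack_key(value) == ("customers", "cust-001")
```

On postback, the recovered pair is exactly what the table service's delete operation needs.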

Posted by on 31 October 2014 | 4:27 am

How to Use Windows Azure Media Services in a Windows Store App (Part 2)

In the previous article, we saw how to upload media to Azure Media Services using the REST API. That, however, was only the first step of a long march: to actually make use of the media, we need Azure's processing power to encode it into a suitable format and then distribute it. In this article we look at encoding.

With the REST API we can upload a file of any format to the server, but for the media to be published correctly, Azure Media Services needs a suitable encoder installed to handle encoding and decoding. Azure Media ships with default encoders that support the mainstream streaming formats on the market, and third parties can also develop their own encoders for it.

In the previous article we uploaded the media file to the Azure Media server, but when an encoder is involved, that upload alone is not complete: before uploading, we need to create an asset file:

private async Task<string> CreateAssetFile(string accessToken, string assetId)
{
    var request = (HttpWebRequest)HttpWebRequest.Create("");
    request.Method = "POST";
    request.ContentType = "application/json;odata=verbose";
    request.Accept = "application/json;odata=verbose";

    string requestbody =
        "{\"Name\":\"test.wmv\", \"ContentFileSize\":\"0\",\"MimeType\":\"video/x-ms-wmv\",\"ParentAssetId\":\"" + assetId + "\"}";

    request.Headers["DataServiceVersion"] = "3.0";
    request.Headers["MaxDataServiceVersion"] = "3.0";
    request.Headers["x-ms-version"] = "2.7";
    request.Headers["Authorization"] = "Bearer " + accessToken;

    var requestBytes = Encoding.UTF8.GetBytes(requestbody);
    var requestStream = await request.GetRequestStreamAsync();
    await requestStream.WriteAsync(requestBytes, 0, requestBytes.Length);
    await requestStream.FlushAsync();

    var response = await request.GetResponseAsync();
    var responseStream = response.GetResponseStream();
    var stream = new StreamReader(responseStream);

    var returnBody = stream.ReadToEnd();
    JObject responseJsonObject = JObject.Parse(returnBody);
    var d = responseJsonObject["d"];
    return d.Value<string>("Id");
}

Then, after the upload, update the asset file's information:

private async Task MergeAssetFile(string accessToken, string fileId, string assetId, ulong size)
{
    var request = (HttpWebRequest)HttpWebRequest.Create("'" + fileId + "')");
    request.Method = "MERGE";
    request.ContentType = "application/json;odata=verbose";
    request.Accept = "application/json;odata=verbose";

    string requestbody =
        "{\"ContentFileSize\":\"" + size + "\",\"MimeType\":\"video/x-ms-wmv\",\"Name\":\"test.wmv\",\"ParentAssetId\":\"" + assetId + "\"}";

    request.Headers["DataServiceVersion"] = "3.0";
    request.Headers["MaxDataServiceVersion"] = "3.0";
    request.Headers["x-ms-version"] = "2.7";
    request.Headers["Authorization"] = "Bearer " + accessToken;

    var requestBytes = Encoding.UTF8.GetBytes(requestbody);
    var requestStream = await request.GetRequestStreamAsync();
    await requestStream.WriteAsync(requestBytes, 0, requestBytes.Length);
    await requestStream.FlushAsync();

    var response = await request.GetResponseAsync();
}

Only then does Azure Media really have the uploaded media file's information, and we can move on to the encoding work. First we obtain a media processor; here we choose the Windows Azure Media Encoder:

public async Task<String> GetProcessorId(string accessToken)
{
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("$filter=Name%20eq%20'Windows%20Azure%20Media%20Encoder'");
    request.Method = "GET";
    request.ContentType = "application/json;odata=verbose";
    request.Accept = "application/json;odata=verbose";
    request.Headers["DataServiceVersion"] = "3.0";
    request.Headers["MaxDataServiceVersion"] = "3.0";
    request.Headers["x-ms-version"] = "2.5";
    request.Headers["Authorization"] = "Bearer " + accessToken;

    WebResponse response = await request.GetResponseAsync();
    Stream responseStream = response.GetResponseStream();
    StreamReader stream = new StreamReader(responseStream);

    var returnBody = stream.ReadToEnd();
    JObject responseJsonObject = JObject.Parse(returnBody);
    var results = responseJsonObject["d"]["results"][0];
    return results.Value<string>("Id");
}

With the processor in hand, we can create an encoding job:

public async Task<String> CreateEncodeJob(string jobname, string assetId, string processorId, string accessToken)
{
    var assertURI = "'" + assetId + "')";

    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("");
    request.Method = "POST";
    request.ContentType = "application/json;odata=verbose";
    request.Accept = "application/json;odata=verbose";

    String requestbody =
        "{\"Name\" : \"" + jobname + "\"," +
        " \"InputMediaAssets\" : [{\"__metadata\" : {\"uri\" : \"" + assertURI + "\"}}]," +
        " \"Tasks\" : [{\"Configuration\" : \"H264 Smooth Streaming 720p\"," +
        " \"MediaProcessorId\" : \"" + processorId + "\"," +
        " \"TaskBody\" : \"<?xml version=\\\"1.0\\\" encoding=\\\"utf-8\\\"?><taskBody><inputAsset>JobInputAsset(0)</inputAsset><outputAsset>JobOutputAsset(0)</outputAsset></taskBody>\"}]}";

    request.Headers["DataServiceVersion"] = "3.0";
    request.Headers["MaxDataServiceVersion"] = "3.0";
    request.Headers["x-ms-version"] = "2.2";
    request.Headers["Authorization"] = "Bearer " + accessToken;

    var requestBytes = Encoding.UTF8.GetBytes(requestbody);
    var requestStream = await request.GetRequestStreamAsync();
    await requestStream.WriteAsync(requestBytes, 0, requestBytes.Length);
    await requestStream.FlushAsync();

    var response = await request.GetResponseAsync();
    var responseStream = response.GetResponseStream();
    var stream = new StreamReader(responseStream);

    var returnBody = stream.ReadToEnd();
    JObject responseJsonObject = JObject.Parse(returnBody);
    var d = responseJsonObject["d"];
    return d.Value<string>("Id");
}

Here we choose to encode the media file as H264 Smooth Streaming 720p. That streaming format mainly targets the Windows platform; Azure Media also provides cross-platform support through formats such as HLS and MPEG-DASH.

Once the encoding job is created, Azure Media starts encoding in the background. You can wait a while, or query the job state whenever you need to; when encoding is complete, the state value 3 is returned. First we implement the function that queries the job state:

public async Task<int> GetJobState(string Id, string accessToken)
{
    String uri = "'" + Id + "')/State";

    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(uri);
    request.Method = "GET";
    request.ContentType = "application/json;odata=verbose";
    request.Accept = "application/json;odata=verbose";

    request.Headers["DataServiceVersion"] = "3.0";
    request.Headers["MaxDataServiceVersion"] = "3.0";
    request.Headers["x-ms-version"] = "2.5";
    request.Headers["Authorization"] = "Bearer " + accessToken;

    WebResponse response = await request.GetResponseAsync();
    Stream responseStream = response.GetResponseStream();
    StreamReader stream = new StreamReader(responseStream);

    var returnBody = stream.ReadToEnd();
    JObject responseJsonObject = JObject.Parse(returnBody);
    var d = responseJsonObject["d"];
    return d.Value<int>("State");
}

Then we add code to the Button_Click event handler from the previous article as follows:

private async void Button_Click(object sender, RoutedEventArgs e)
{
    StorageFolder library = Windows.Storage.KnownFolders.VideosLibrary;
    var video = await library.GetFileAsync("test.wmv");

    string accessToken = await GetAccessToken();
    string assetId = await CreateAsset(accessToken);
    string fileId = await CreateAssetFile(accessToken, assetId);
    string policyId = await UpdateAccessPolicy(accessToken);
    string containerName = await GetContainerName(policyId, assetId, accessToken);

    // Use the storage API to upload.
    StorageCredentials credentials = new StorageCredentials(storageName, storageKey);
    var storageAccount = new CloudStorageAccount(credentials, false);
    var cloudBlobClient = storageAccount.CreateCloudBlobClient();

    var container = cloudBlobClient.GetContainerReference(containerName);
    await container.CreateIfNotExistsAsync();

    CloudBlockBlob blockBlob = container.GetBlockBlobReference("Wildlife.wmv");
    await blockBlob.UploadFromFileAsync(video);

    var bp = await video.GetBasicPropertiesAsync();
    await MergeAssetFile(accessToken, fileId, assetId, bp.Size);

    var processorId = await GetProcessorId(accessToken);
    var Id = await CreateEncodeJob("testJob", assetId, processorId, accessToken);

    int jobState = 0;
    while (jobState != 3)
    {
        jobState = await GetJobState(Id, accessToken);
        await Task.Delay(TimeSpan.FromSeconds(2));
    }
}

Here we wait for the job to finish by polling its state; to avoid polling too often, the code waits two seconds between polls. With that, the encoding of the media file is done. Run the code above, and once the job completes you can check on the Azure Portal that the media file was encoded successfully.

In this installment we encoded the media file; in the next I will continue with how to publish the encoded stream. Stay tuned.
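A fixed two-second polling loop like the one in Button_Click can hammer the service on long encoding jobs; a common refinement is exponential backoff with a cap and a timeout. A hedged sketch of that pattern, in Python, with get_job_state as a stand-in for the REST call:

```python
import time

FINISHED = 3  # Azure Media job state value for a finished job

def wait_for_job(get_job_state, initial_delay=2.0, max_delay=60.0, timeout=3600.0):
    """Poll get_job_state() until it reports FINISHED, doubling the wait
    between polls up to max_delay. Raises TimeoutError if the total wait
    exceeds timeout. get_job_state is a stand-in for the REST query."""
    delay, waited = initial_delay, 0.0
    while get_job_state() != FINISHED:
        if waited >= timeout:
            raise TimeoutError("encoding job did not finish in time")
        time.sleep(delay)
        waited += delay
        delay = min(delay * 2, max_delay)  # back off: 2s, 4s, 8s, ... capped
    return True

# Example with a fake job that finishes on the fourth poll:
states = iter([0, 1, 2, 3])
assert wait_for_job(lambda: next(states), initial_delay=0.01) is True
```

The cap keeps latency bounded once the job is nearly done, while the doubling keeps idle load low during long encodes.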

Posted by on 31 October 2014 | 4:19 am

Halloween Costume with Arduino

This Halloween, my daughter and I decided to add some dazzle to her fairy costume. Since we were already learning to code on Arduino, we decided to dip our hands into wearables. The basic idea is to build a costume that glows when someone comes close. The project was intended to teach a 9 year old to code and is hence simple enough for her to grasp.

We used the following parts: an Arduino UNO board, a TIP120 transistor, a 1N4004 diode, a 1K resistor, and an HC-SR04 ultrasonic range finder.

Circuit: It's best to consider the circuit as two separate pieces: one to acquire the distance of someone approaching using the HC-SR04 ultrasonic range finder, and one to actually make the LED strip glow. The first part consists of connecting the 4 pins of the HC-SR04 as follows. We cannot simply drive the LED strip from an Arduino output pin, because the strip draws far more current than the Arduino chip can supply, so we use a TIP120 or TIP121 transistor as shown below. There is a nice explanation of this whole setup at ; the same principles hold, but instead of a fan we drive an LED strip.

Code: The entire code is available on GitHub at (I cleaned up the code a tiny bit after my daughter wrote it). This is how it looks:

#include <ultrasonicranging.h>

#define ECHO_PIN 2  // Echo pin of HC-SR04
#define TRIG_PIN 3  // Trigger pin of HC-SR04
#define LED_OUT  5  // Drive LED (base pin of TIP120)

const int space = 125;  // Distance in cm within which to trigger the LED

void setup() {
  Serial.begin(9600);
  pinMode(TRIG_PIN, OUTPUT);  // trigger pin of US range finder
  pinMode(ECHO_PIN, INPUT);   // echo pin of US range finder
  pinMode(LED_OUT, OUTPUT);   // base of TIP120 to drive LED
  analogWrite(LED_OUT, 0);
}

void GlowLed() {
  // Slowly take the LED strip from off to full brightness (glow in)
  for (int brightness = 0; brightness < 255; brightness++) {
    analogWrite(LED_OUT, brightness);
    delay(3);
  }
  // Slowly take the LED strip from full brightness back to off (glow out)
  for (int brightness = 255; brightness >= 0; brightness--) {
    analogWrite(LED_OUT, brightness);
    delay(3);
  }
}

void loop() {
  int distance = GetDistanceInCm(TRIG_PIN, ECHO_PIN);
  Serial.println(distance);
  if (distance <= 0 || distance > space) {
    analogWrite(LED_OUT, 0);
    delay(500);
    return;
  }
  if (distance <= space) {
    GlowLed();
  }
}

Here, to abstract away the intricacies of how distance is read from the ranger, I have used GetDistanceInCm. The source for this library is at . Once we had tested the circuit, we went ahead and soldered it onto a board. My daughter did receive a battle scar (a small burn from the iron), but we battled on. This is how it looks partially done. With my wife's help we sewed it underneath her fairy dress. It was pretty well concealed, other than the sensor sticking out a bit.
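As for what a ranging helper like GetDistanceInCm computes: the HC-SR04 reports an echo pulse whose width is the round-trip time of sound, so distance is pulse width times the speed of sound, halved. A sketch of just that math, in Python for clarity; the constant is the standard room-temperature speed of sound, not taken from the library itself:

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature

def echo_us_to_cm(pulse_width_us: float) -> float:
    """Convert an HC-SR04 echo pulse width (microseconds) to distance in cm.
    The pulse covers the round trip to the target and back, so halve it."""
    return (pulse_width_us * SPEED_OF_SOUND_CM_PER_US) / 2

# A 2900 us echo corresponds to roughly 50 cm:
assert round(echo_us_to_cm(2900)) == 50
```

This is why many HC-SR04 examples simply divide the pulse width by 58 to get centimeters.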

Posted by on 31 October 2014 | 4:04 am


[Original article: Explanation of July 18th outage] [Originally published: 2014/7/31]

First, an apology: it took me a week and a half to get this post written. For a period on Friday, July 18th, most VS authentication was broken; the whole service was down for about 90 minutes. Fortunately it happened during a window when relatively few people were using the service, so far fewer customers were affected than you might expect, but I know that is small consolation to those who were.

My main point is that we learned some things from this incident that will make our service better, and I'm sharing it in the hope of helping others avoid similar mistakes.

What happened? The root cause of the outage was a database in SQL Azure that became very slow. I actually don't know why it did, so I suspect that isn't the true root cause, but for my purposes it is close enough. I believe the SQL Azure team is tracking their part of the root cause; the incident did not have a broad effect on them. Databases go slow from time to time, and over the past year or more SQL Azure has done quite well.

Concretely: Visual Studio (the IDE) establishes a connection to our Shared Platform Services (an instance of our service that manages identity, user profiles, licensing and the like) to get notifications about updates to roaming settings. Shared Platform Services calls Azure Service Bus, which calls the unhealthy SQL Azure database.

Calls from Shared Platform Services (SPS) into the slow Azure database piled up until all operations in SPS were blocked, and, because TFS likewise depends on SPS, TFS operations blocked too. The end result was that VS Online was down until we manually severed the connection to Azure Service Bus and cleared the backlog.

There is a lot we can learn from this. Some of it I already knew; some I didn't; but whether or not I understood the root causes, it was an interesting, instructive failure.

**Update** Within the first ten minutes, several people on my team contacted me to say the root cause may not have been the Azure database. In fact, the point of this post is that it doesn't matter what the root cause was. In a complex service, transient errors will happen; what matters is how you react. So regardless of what triggered it, the "root cause" of this outage was that we did not handle a transient error correctly and allowed it to grow into a total service outage. I may likewise be wrong about Service Bus / the Azure database. I try to avoid making claims about what happened in other services, because that is a dangerous thing to do, and I won't spend time confirming and correcting every detail, because that is not the discussion that matters. This post is not about what caused the incident; it is about how we respond to things like it, and what measures we will take to handle them better in the future.

Don't let a nice-to-have feature take down your core mission. The first and most important lesson is exactly that. One principle of services is that they should all be loosely coupled and fault tolerant. One service going down should not cause widespread failures; the only functionality that should fail is functionality that completely depends on the unavailable component. Google and Bing are very good at this: they run thousands of servers, any one of which can go down without you ever noticing, because most of the user experience stays exactly as it was.

This particular incident started as a failure in the Visual Studio roaming-settings experience. If we had properly contained it, roaming settings would simply not have synced for 90 minutes, everything else would have kept working, and it would not have looked like a serious problem. Instead, the whole service went down.

In our incident, every service handled failures occurring in the services beneath it, but in one critical service the error ultimately drained a thread pool dry. Once that point was reached, no service could get any work done.

Smaller services are better. Part of the problem was that critical services, like authentication, shared an exhaustible resource (a thread pool) with unimportant services (roaming settings). Services should be factored into the smallest sensible units of work. Those units will still hit the usual failures when they run, and all interactions should follow "defensive programming" practices. If our authentication service fails, our service is down; but a roaming-settings failure should not take it down. Over roughly the past 18 months we have been gradually refactoring VS Online into a set of loosely coupled services; in fact, about a year ago, what is now SPS was split out of TFS into a separate service. All told, we have around 15 independent services today. Clearly, we need more :)

How many times should you retry? Another long-standing rule of services is to treat transient failures as "normal." Every service, when consuming another, should tolerate dropped packets, transient latency, back-pressure throttling and so on. One of the earliest techniques here is to retry when a call to a dependency fails. That seems fine, until you find yourself, as we did, in a set of cascading retries:

Visual Studio -> SPS -> Service Bus -> Azure DB

When the Azure DB failed, Service Bus retried 3 times; when Service Bus failed, SPS retried 2 times; when SPS failed, VS retried 3 times. 3 * 2 * 3 = 18 attempts. So every Visual Studio client that started up during this period drove 18 attempts against the SQL Azure database. And because of the underlying problem, the database was slow (calls were timing out at about 30 seconds), so 18 attempts * 30 seconds = 9 minutes per client. All those calls stacked up in every layer of the stack until, eventually, the thread pools were full and no more requests could be processed.

It turns out SQL Azure is actually quite good at communicating to callers whether a retry is worthwhile. Service Bus does not honor that, nor propagate it to its callers; neither does SPS. So a new rule I have learned: it is important to inspect errors carefully and propagate "retryability" between services, rather than blindly retrying at every layer. Had we done that, each connection would have taken 30 seconds rather than 9 minutes, and things would have gone much better.

Fail fast rather than blocking for a long time. Imagine if SPS had kept a count of how many concurrent calls were outstanding to Service Bus. Given that it is a lower-priority dependency, the calls are synchronous and the thread pool is finite, SPS could decide that once concurrency crossed some threshold (say 30, for the sake of discussion) it would fail all new calls quickly until the backlog eased. Some roaming-settings calls would have been rejected promptly, but we would never have run out of threads, and the high-priority services would have kept running well. With clients set to reconnect at some fairly infrequent interval, the system would have healed itself once the underlying backlog was cleared.

Threads, threads, and more threads. I am sure that, had others not helped point it out, I would never have identified that one root cause of this incident was cross-service calls being made synchronously. Had they been asynchronous, they would not each have held a thread, and the thread pool would not have been exhausted. That is quite right, but it is not the thing I care most about: you always consume some resource, memory if nothing else, even with async calls. That resource may be large, but it is not infinite. The techniques listed above are valuable whether communication is synchronous or asynchronous, and they also prevent other bad behavior, such as hammering an already fragile database with excessive retries. So async is a good idea, but I do not consider it a sufficient fix on its own.

So, like the series of "infrastructure improvement" posts we have published before, this will help us find our way to a more reliable service. All software fails, sometimes for reasons nobody foresees. The key is to examine every failure point, chase every failure to its root cause, internalize the lessons, and build better defenses. I am very sorry for the interruption we caused, and I cannot promise nothing like it will ever happen again; *but* after a few weeks of investigation and fixes (we have put several defensive measures in place), an outage should not happen again for this same cause.

As always, thank you for joining us on this journey and for your understanding. I hope these lessons are of some value to your own development work.

Brian
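The multiplication in the retry story is easy to underestimate; a small sketch, in Python, using the retry counts and the roughly 30-second timeout quoted in the post, shows how per-layer retries compound into total attempts and worst-case latency:

```python
def cascaded_attempts(attempts_per_layer):
    """Total attempts that reach the bottom dependency when every layer
    independently retries a failing call. attempts_per_layer lists the
    attempt count at each layer, outermost caller first."""
    total = 1
    for attempts in attempts_per_layer:
        total *= attempts
    return total

# VS tries 3x, SPS 2x, Service Bus 3x -> 18 attempts against the database.
attempts = cascaded_attempts([3, 2, 3])
assert attempts == 18

# With each attempt timing out at ~30 seconds, one client blocks ~9 minutes.
timeout_seconds = 30
assert attempts * timeout_seconds == 540  # 540 s = 9 minutes
```

This is why propagating "retryability" matters: if any layer learns that a retry is futile and fails fast, the product collapses back toward a single attempt.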

Posted by on 31 October 2014 | 3:59 am

XAML Mascot Materials

#マスコットアプリ文化祭 #win8dev_jp #wpdev_jp These are things I made a while ago, but let me gather them in one place. pronama.xaml, Pronama-chan in XAML: a XAML rendition of Pronama-chan (Kei Kurei), which you can download and use. How to make a Pronama-chan CheckBox: how to build a checkbox from that XAML, with a facial expression that changes with the checked state. A user control from Pronama-chan XAML v1.2: the final form is a control in which 3 kinds of eyebrows, 7 kinds of eyes and 5 kinds of mouths can be set individually, and 8 facial expressions can be switched through properties. And a XAML sample for Windows 8 Store apps (XAML Miku): here is its XAML. Well, that one is not eligible for this mascot-app contest, though.

Posted by on 31 October 2014 | 3:53 am

Visual Studio and TFS 2013.4 (Update 4) Release Candidate

[Original article: Visual Studio and TFS 2013.4 (Update 4) Release Candidate] [Originally published: 2014/10/16]

It's time. Here is the RC of Visual Studio 2013.4 and Team Foundation Server 2013.4. Download the Visual Studio 2013 Update 4 RC, and see the Visual Studio 2013 Update 4 KB article.

As product development progressed, I wrote several posts about the CTPs. I'll first give a high-level summary of the ALM improvements, then cover what's new in this RC.

Work item tracking and agile project management improvements: this update includes many improvements to work item tracking, such as trend charts, area path searching, better support for embedded URLs, performance improvements, full-screen support and more. No single one of them is worth boasting about, but taken together the improvements are very good.

Stakeholder licensing: Update 4 includes the licensing change I mentioned a while back, so that people who only need access to basic work management and project status tracking can use the product without paying.

Pull requests: support for basic Git pull requests / code review.

Other improvements in this RC:

Bugs on the backlog: some teams want their bugs to show on the backlog; some only want user stories/requirements. In Update 4 you can now set your preference, and as part of Update 4 bugs can appear on the Kanban board. A further stage is to have them appear fully on the task board.

CodeLens performance improvements: we have seen plenty of issues where TFS with CodeLens consumed a lot of CPU or disk space. We have made significant improvements, for example making CodeLens up to 10x faster and reducing the amount of temporary SQL Server space used. This should address all the server-side CodeLens issues I have seen.

Associated test suites: a test case can be shared across multiple test suites, and when you are modifying a test case it is great to see which suites reference it. In Update 4 you can view the associated test suites pane.

Recent test results: similar to the associated test suites pane, we have added a new pane that makes it convenient to see a test case's most recent results. The screenshot I am showing here is from the RTM build, since the current RC is not as polished as RTM will be; these improvements will make it into the release as well.

Test case charts: to provide better visibility, we have added test charting to visualize your test plans and results. You can pin these charts to your project home page so that everyone can see them, even stakeholders.

To sum up: this is the last preview release before the Update 4 RTM. If you don't have it yet, now is the time to get it, and please let us know about any problems you hit. And of course, VS Online remains a great way to try these improvements.

Brian

Posted by on 31 October 2014 | 3:48 am


[Original article: Chance to Connect(); on What's Coming Next, November 12th and 13th] [Originally published: 2014/10/16]

On November 12th we will host an online developer event called Connect();, a summit for talking with developers about the next generation of development tools, services, and the cross-platform Microsoft application platform. See the Connect(); event page for the agenda and other details.

Connect(); builds on where we have arrived and the work we have been doing over the past year. As we prepare for next month's event, I thought we should share some highlights from the last year.

Where we are: A year ago we launched Visual Studio 2013 and announced the availability of Visual Studio Online. Since then, developers have adopted them at a tremendous pace, with more than 7 million downloads of VS 2013 and more than 1.7 million registered Visual Studio Online users. Delivering on our promise to ship at a faster cadence, in one year we have released three major updates to Visual Studio 2013 and fifteen updates to Visual Studio Online, and most developers are taking full advantage of these updates. Through these updates and additional technology previews, we have been showing how we are embracing the mobile-first, cloud-first and DevOps trends.

Mobile: Today's mobile developers face a diversity of device platforms, Android, iOS and Windows, along with a variety of device form factors. In Visual Studio we have been working to let developers target any mobile platform while sharing as much code and as many assets as possible. Using C# with Xamarin, or JavaScript with Visual Studio Tools for Apache Cordova (preview), Visual Studio developers can target the breadth of devices their customers need. When targeting the Windows platform, developers can take advantage of universal Windows app projects in Visual Studio.

Cloud: The cloud offers incredible flexibility and new approaches to application architecture and development practices. Over the past year, with support for Windows and Linux, Chef and Puppet, SharePoint and Oracle, Java and PHP and more, we have opened the Azure platform to all developers. We have also been discussing what is next for .NET, including the open-source ASP.NET vNext and .NET Compiler Platform ("Roslyn") projects, as well as the .NET Foundation. Developers on any language and platform can also benefit from the many new platform services Azure offers at scale, from API management and machine learning to document databases and search.

DevOps: Across every part of the software development industry, the one constant is the pace of application delivery. In Visual Studio Online we have brought together a full set of DevOps services to help developers embrace the agile and DevOps trends, from team collaboration and agile planning to release management and application insights. These developer services are built on an open ALM platform that can integrate with any other tool or service through REST APIs, OAuth and Service Hooks.

What's next: At Connect(); next month we will have the chance to talk about the next wave of innovation and releases across these areas. The event will include updates from Scott Guthrie, Brian Harry, Scott Hanselman and myself, along with deep dives from product team members into a wide variety of new Microsoft developer tools and services. Save the date for Connect(); now.

It has been an exciting year for developers so far. I look forward to sharing more about the next wave of development when you join us online on November 12th and 13th.

Namaste!

Posted by on 31 October 2014 | 3:40 am

Microsoft Supports the iDEA Programme in Partnership with the Duke of York

Getting young people into the digital industry and entrepreneurship Microsoft Ventures in the UK recently announced a new partnership with the Nominet Trust and the Duke of York to help support the iDEA Programme, which aims to support more than one million young people to pursue their own ideas, develop their digital skills and get to grips with the reality of business over the next five years. iDEA supports entrepreneurship as a viable career option, leading to the development and inception of the small- and medium-sized businesses of tomorrow. The programme officially launched on October 15th and is open to the UK's young people and teachers to get involved. Though this is a UK initiative, it sets a great example that can be mirrored globally and used as inspiration for the important charge of inspiring young people to learn digital skills and build the next generation of big ideas. This concept is best communicated by George The Poet in his amazing iDEA poem "All Existence is Contribution": (Please visit the site to view this video) We do hope you'll help us spread the word about this great initiative. As one of the biggest technology companies in the world, and the original tech start-up, Microsoft has a responsibility to help make sure we are encouraging and inspiring more young people to consider careers in technology and computing. This responsibility has led Microsoft to become instrumental, for example, in landing the new technology-based curriculum in the UK. In addition, with Microsoft Ventures, we have demonstrated our commitment to the success of small business on a global scale. The partnership with the Nominet Trust and the Duke of York is, therefore, a perfect fit and will help us support, even further, the attainment of these goals.
Through the iDEA Programme, Microsoft will be offering practical support by giving participants access to DreamSpark, a programme which provides students with access to professional-level developer and design tools and resources free of charge, allowing them to explore the world of computing and to develop their passions and skills. Additionally, Microsoft will be creating a set of digital badges that young people can earn to get certified in our technologies. In summary, this partnership will provide iDEA with access to a wealth of experience and technical knowledge, as well as the resources and support offered by the DreamSpark programme. We hope you will help us spread the word about this fantastic initiative to young people, teachers and others you know who might be interested in getting involved. To find out more about iDEA, please watch the video above and visit:

Posted by on 31 October 2014 | 3:30 am