한 주간의 마이크로소프트 소식 [12월 4주차]

Hello. Here is a summary of the week's major Microsoft development news, based on the This Week On Channel 9 video.

Project "Orleans" to be open sourced
Project "Orleans", known as the service architecture behind Halo 4, will be released as open source. "Orleans" uses the Actor Model, a model for concurrent computing, to enable large-scale computing in the cloud. It was designed so that developers can use "Orleans" without having to master complex concurrency problems or scaling patterns. You can browse the related documentation and samples on CodePlex and download the Preview SDK there as well. The open source release is expected in early 2015, under the MIT license on GitHub.

Bing developer tools for Visual Studio
The Bing developer tools for Visual Studio, which combine the Sample Browser extension with Bing Code Search, have been released. With these tools you can conveniently use some 19 million code snippets and sample projects from inside Visual Studio. The Contextual Search feature lets you run web searches from compiler error messages and look up how to use various APIs. Because the Bing developer tools integrate with IntelliSense, you can also look up the source code or explanations you need while you are coding.

Skype Translator preview released
Skype Translator has been released as a preview. This preview provides spoken translation between English and Spanish and can translate instant messages (IM) in more than 40 languages. Skype Translator is built on machine learning, so its quality should keep improving over time. If you would like to know more about how Skype Translator works and the story behind its development, see the link.

Xim 1.3 update: a great way to look at photos together
Xim is an app that lets you share the photos on your smartphone with several people through their smartphones or web browsers. Built by Microsoft Research's Fuse Labs, Xim is available for Windows Phone, Android and iOS, and it also works in a web browser via pairing, so you can view photos together on a PC, an Xbox One, a Chromecast and more.

Visual Studio Online (VSO) update: code editing, backlog filtering and more
We introduced Visual Studio Online (VSO) in the first week of November. VSO is updated roughly every three weeks; this update adds the ability to edit and commit source code from the web browser, to add and delete files, and to filter and search the backlog. The many other changes are covered in the link, and the roadmap for VSO and Team Foundation Server is also worth checking.

Using Windows on the Intel Galileo board
Galileo is open source hardware with 256 MB of memory. It has no graphical interface, but you can communicate with it from another PC over Telnet, and if you have a PC with Visual Studio installed you can run an embedded version of Windows on the Galileo.

Kinect Evolution app source code open sourced
The source code of the Kinect Evolution app, which Microsoft built to demonstrate the capabilities of the Kinect for Windows sensor and platform, has been open sourced. You can now use the Kinect Evolution app for demos or reuse its components in your own projects. See the link for how to download the SDK, along with detailed explanations and videos. (In English)

List of .NET Framework open source projects
A well-organized list of roughly 200 open source projects related to the .NET Framework has been published on GitHub.

Modernizing legacy C++ code to C++11/14
An article packed with the know-how of a developer who has spent years modernizing a wide variety of legacy C++ projects.

'This Week in Microsoft News' takes a break next week. Happy holidays to all of our readers, and we will see you again in the new year. Thank you.

Posted by on 22 December 2014 | 7:42 pm

The Microsoft Small Basic Guru Award Winners (November 2014)

Nonki and I battled it out last month in the Small Basic Guru Awards! Here are the results...

Small Basic Technical Guru - November 2014

Ed Price - MSFT: Small Basic Student Testimonies
RZ: "WOW! This is a fantastic list. It's very encouraging for everyone to read this!"
Michiel Van Hoorn: "This is very, very interesting as inspiration for fun ideas in every age category. Also very valuable if you are preparing for teaching kids."

Nonki Takahashi: Small Basic: Capitalization Conventions
RZ: "Many new learners (especially children) don't distinguish the difference between upper and lower cases. This is a great way to not only introduce that concept but also explain how to do the conversion."
Michiel Van Hoorn: "Fundamental reading for the beginning programmer. Learn to do this right in the beginning and enjoy it forever! Great explanation."

Nonki Takahashi: Small Basic: IntelliSense
RZ: "A great way to introduce new learners to the concept of IntelliSense."
Michiel Van Hoorn: "We take IntelliSense too much for granted. Learn to love it and you won't want to do without it."

Also worth a mention was this other entry this month:
Small Basic Known Issue: 28245 - Outlines of Shapes Rectangle and Ellipse are Smaller, by Nonki Takahashi
Michiel Van Hoorn: "Without this article you are probably confused about the behavior differences, but this explains it all."

Hopefully we will see you in December 2014's listings?

- Ninja Ed

Posted by on 22 December 2014 | 7:38 pm

Happy Holidays from the MSDN NZ Team!

23rd December 2014

I want to take a moment to thank you all for your contribution to a fantastic 2014 and to wish you a safe and happy holiday with your friends and families. In the last 6 months we have run events in NZ that have been attended by 7,700 people! TechEd was once again a highlight for our team, moving the opening keynote to Vector Arena and reaching the greatest number of attendees since the event started in NZ 19 years ago. For those that didn't attend, we have hundreds of sessions from the event available for you to watch online. Recently we also launched the Australian Azure datacentres, with great support for NZ companies moving their workloads to the cloud. I am personally excited to see New Zealand Cricket join the All Blacks in migrating to Azure in preparation for next year's world cup events. In addition, New Zealand customers are going to benefit from Office 365 and Dynamics CRM Online being delivered from Australian datacentres. With the reduction in latency, I encourage you to once again look at Azure as an option for hosting a development VM so you can code from any machine with an Internet connection. If you are an MSDN subscriber, activate your Azure benefits to use your MSDN software on Azure for free. I am also really proud of the work that the team and our customers and partners are doing to help our IT students get relevant work experience before they finish their studies. A real highlight for me this year was to see Team Estimeet (a team that was formed during our summer student programme) win the Innovation Category in the 2014 Microsoft Imagine Cup. I am also encouraged to see last year's national winners commercialise their idea. The tech start-up space in New Zealand is continuing to break ground, and we have updated our App Showcase booklet to reflect some of this innovative work. It isn't just the start-up space where innovation is happening: Datacom has worked with Microsoft to help Zespri become the world's largest SAP workload running on Azure. As we sign off 2014 I want to once again thank you for your support, and I hope to see many of you again in 2015 recharged and ready to go. Thanks, Nigel.

Nigel Parker, Developer Experience Director, Microsoft NZ

New Zealand Herald Competition. The New Zealand Herald is currently running a competition: simply download their app, rate it, fill in your details, and be in to win a Nokia Lumia 930 or a Dell Venue 8" tablet! Easy as!

SQL Server 2005 Support Ends April 12, 2016. Support for all versions of SQL Server 2005 ends on April 12, 2016. Start your upgrade conversations and find out more about modernizing your data for breakthrough performance here.

Cultural Barriers Surrounding the #DevOps Movement. A Microsoft study found that everybody wants to join the DevOps movement, but resolving cultural barriers between developers and operations is the biggest obstacle. @Saugatuckcloud also discusses five surprising findings about DevOps, and the advantages of DevOps in both large and small organisations.

Cross-Platform Development with Visual Studio. Discover how easy it is to develop cross-platform apps with Visual Studio. Are you a Python enthusiast? Develop as if it has always been a part of Visual Studio.

Microsoft Azure/Cloud Camp, 10th February (AKL), 12th February (WEL). At this DevCamp you will learn about some of the best features and services of Microsoft Azure and how to move a variety of apps to the cloud.
Register now to get your seat and learn what's new.

Successful developers are always upgrading their skills. Microsoft Virtual Academy (MVA) offers online Microsoft training delivered by experts. Watch videos, download PowerPoint slides and test yourself as you learn, at absolutely no cost. If you want to learn something different, head over to MVA here! Here is the course your Microsoft Technical Evangelists recommend this week: Developing in HTML5 with JavaScript and CSS3 Jump Start. Over 270,000 developers have enrolled in this course worldwide, making it the top course for developers at Microsoft Virtual Academy. At an intermediate to advanced level, it provides an accelerated introduction to HTML5, CSS3 and JavaScript, and helps students learn basic programming skills. No experience in HTML5 coding? Students with knowledge of HTML4 should still be able to complete the course.

Posted by on 22 December 2014 | 7:02 pm

Extending SQL Server 2014 AlwaysOn Resource Group with Storage Spaces on Microsoft Azure

The new Azure Preview Portal makes it super-easy to configure a highly available SQL Server 2014 AlwaysOn Availability Group cluster with a new Azure Resource Group Template. After completing 4 fields of information and clicking a single Create button ...(read more)

Posted by on 22 December 2014 | 6:29 pm

AX - Hotfixes for the Mexico localization

To date, these are the latest KBs released for the Mexico localization in AX 2012, including the KBs that deliver the Electronic Accounting (Contabilidad Electrónica) functionality for AX 2009 and AX 2012.

AX 2012 R3
KB 3015102: Electronic Accounting functionality. Delivers the Electronic Accounting feature.

AX 2012 R2 (released after CU7)
KB 3015097: Electronic Accounting functionality. Delivers the Electronic Accounting feature.
KB 3022413: After installing KB 2973728, conditional sales tax does not post as expected when paying a purchase order. The VAT was not being transferred to the configured account after installing KB 2973728.
KB 3014082: DIOT report does not show data if the invoice and the payment are in different months. The DIOT now prints invoices in the month in which they were paid.
KB 3014083: Mexico CFDI - the Complement of the Original String (Cadena Original del complemento de certificación del SAT) is missing. The printed representation of the CFDI must contain the SAT certification complement's original string.
KB 3001177: CFDI - the XML file has the exchange rate multiplied by 100. For the electronic invoice, the XML was multiplying the exchange rate by 100.
KB 3001179: Fee project adjustment for the Mexico localization. It is now possible to adjust fee journals in projects when the Mexico localization is used.
KB 3000945: CFDI includes the exterior and interior numbers in the XML. The AX fields "Street number" (exterior number) and "Build complement" (interior number) are used to place these values in the CFDI XML.
KB 2977599: PDF from CFDI e-mail is blank. When sending the CFDI PDF by e-mail, the PDF was missing the fiscal folio, SAT certificate number, digital seal and original string.
KB 2972551: DIOT vendor name is not required. The vendor name is no longer mandatory for domestic vendors.
KB 2974333: DIOT country code should be only 2 characters. The DIOT report must use the 2-character country code (previously 3 characters), for example US instead of USA.
KB 2972548: XML file from CFDI is not as expected when a discount exists. The XML Subtotal field (Comprobante node) was adding the discount amounts; the field must contain the amount before discounts.
KB 2954012: Mexico CFDI - UTF-8 is missing after approval. After stamping the CFDI, the XML file lost the 'encoding UTF8' declaration. The hotfix prevents it from being lost.
KB 2939195: Mexico CFDI does not remove blank spaces before or after the pipe. When stamping a CFDI whose description ends with a line break, whether on a service invoice or a sales order, the invoice was rejected, and a blank space appeared between the description and the pipe that separates the digital seal. The hotfix allows the document to be stamped and no blank spaces are generated.
KB 2928391: Mexico - error message when running the export/import electronic invoice (CFDI) process. Stamping a CFDI with many lines returned the error message "Mexico Getting error message when running export/import electronic invoice process".
KB 2926242: RFC for legal person. The RFC field validation on vendor invoices must also allow the RFC format for individuals (personas físicas).
KB 2916859: Electronic invoice - blank spaces and original string (CFDI). When registering electronic invoices with the CFDI method, the XML seal could be incorrect if the invoice information contained double blank spaces; they must be converted to a single blank space.
KB 2910406: DIOT for global vendors. The DIOT report must generate a .txt file that groups global vendors into a single line, on which no RFC is printed.
KB 2898090: Data is not retrieved correctly when generating the DIOT declaration. The DIOT report must only show information related to input taxes.

AX 2012 RTM (released after CU7)
KB 2984875: Electronic Accounting functionality. Delivers the Electronic Accounting feature.

AX 2009
KB 2987417: Electronic Accounting functionality. Delivers the Electronic Accounting feature.

Posted by on 22 December 2014 | 5:13 pm

IntelliTrace Standalone Collector and Application Pools running under Active Directory accounts

You will often configure an ASP.NET web site to run as an Active Directory (AD) user so that the site can access that user’s network resources (e.g. a file share). This is accomplished by changing the identity of the IIS Application Pool the web site runs under. If you try to use the IntelliTrace Standalone Collector with such an application pool while you are logged in with a local user account (i.e. a non-AD user account) you will get this error message: User <domain\username> does not have permissions to read collection plan file "C:\Windows\Temp\DefaultAppPool_collection_plan.ASP.NET.default.xml" This error message is deceiving because the AD user may in fact have the necessary permissions to access the file in question. The actual problem is that you are using a PowerShell prompt to launch the collector and that PowerShell prompt is running under a non-AD account. Non-AD accounts cannot query the AD, so the permissions check altogether fails and you get this message. At this point in time you have the following two options: Add the AD user account used by the application pool to the machine’s local admin group, log in with that user and then run the collector. Some people don’t like this option because it means that, even temporarily, the application pool user is a local admin on the machine. Alternatively, you can add a different Active Directory user to the local admin group on the machine and run the PowerShell prompt as that user. In the future we are considering making the permissions check optional. When the collector is unable to verify permissions of an Active Directory account, we will simply warn you about it and ask if you would like to continue anyway. Is this important to you? Do you want us to work on something different instead? We are always looking for feedback and comments for our features. Please send us a tweet or visit the MSDN Diagnostics forums.

Posted by on 22 December 2014 | 4:37 pm

Project Online: Lost your reports?

A quick shout out for my recent post over on the Project Support blog. In case you have lost your reports, it is worth looking at "Project Online: Where did my reports go?" - and no, I haven't got your reports, but I may be able to help you find them!

Posted by on 22 December 2014 | 4:27 pm

The Best of Microsoft in the Classroom in 2014

2014 has been a big year for Microsoft in the Classroom. So big that it’s been hard to keep up with all the cool new tools that Microsoft have released, refined and developed. ...(read more)

Posted by on 22 December 2014 | 4:22 pm

EF6.1.2 RTM Available

Today we are pleased to announce the availability of EF6.1.2. This patch release includes a number of high priority bug fixes and some contributions from our community.

What's in EF6.1.2?
EF6.1.2 is mostly about bug fixes; you can see a list of the fixes included in EF6.1.2 on our CodePlex site. We also accepted a couple of noteworthy changes from members of the community:

Query cache parameters can be configured from the app/web.config file:

<entityFramework>
  <queryCache size='1000' cleaningIntervalInSeconds='-1'/>
</entityFramework>

SqlFile and SqlResource methods on DbMigration allow you to run a SQL script stored as a file or embedded resource.

Where do I get EF6.1.2?
The runtime is available on NuGet. Follow the instructions on our Get It page for installing the latest version of the Entity Framework runtime. The tooling is available on the Microsoft Download Center. You only need to install the tooling if you want to create models using the EF Designer, or generate a Code First model from an existing database.
Download tooling for Visual Studio 2012
Download tooling for Visual Studio 2013
The tooling will be included in future releases of Visual Studio 2015 (currently in preview).

Thank you to our contributors
We'd like to say thank you to the folks from the community who have contributed to the 6.1.2 release so far: BrandonDahler, ErikEJ, Honza Široký, martincostello, UnaiZorrilla.

What's next?
In addition to working on the next major version of EF (Entity Framework 7), we're also working on another update to EF6. This update to EF6 is tentatively slated to be another patch release (EF6.1.3); we are working on a series of bug fixes and accepting pull requests.

Posted by on 22 December 2014 | 4:08 pm

12/22 - Errata added for [MS-DTSX2]: Data Transformation Services Package XML Version 2 File Format

Changes to the following subsections: http://msdn.microsoft.com/en-us/library/dn781096.aspx#BKMK_DTSX2  Section 2.4, ExecutableTypePackage Section 2.4.4.1.2.1.1, ConnectionManagerConnectionManagerAttributeGroup Section 2.4.4.1.2.1.5.2, HttpConnectionAttributeGroup Section 2.7.1.1.1.1.5, PipelineComponentComponentClassIDEnum Section 2.7.1.1.2.1, PipelinePathType Section 2.7.1.11.1.1.1, SqlTaskDataType Section 5.8, SQLTask XSD Section 2.7.1.11.1.1.1.10.5, BackupCompressionActionEnum Section 2.7.1.26.1, XMLTaskOperationTypeEnum Section 2.9.2, BaseExecutablePropertyAttributeGroup

Posted by on 22 December 2014 | 1:31 pm

12/22 - Errata added for [MS-WSMV]: Web Services Management Protocol Extensions for Windows Vista

Changes to Section 3.2.4.1.19, Remote Shell Compression: http://msdn.microsoft.com/en-us/library/dn785067.aspx#BKMK_WSMV

Posted by on 22 December 2014 | 1:25 pm

12/22 - Errata added for [MS-RDPBCGR]: Remote Desktop Protocol: Basic Connectivity and Graphics Remoting

Changes to Section 1.3.1.1, Connection Sequence: http://msdn.microsoft.com/en-us/library/dn785069.aspx#BKMK_RDPBCGR

Posted by on 22 December 2014 | 1:24 pm

12/22 - Errata added for [MS-RDPECLIP]: Remote Desktop Protocol: Clipboard Virtual Channel Extension

Changes to Section 2.2.1, Clipboard PDU Header (CLIPRDR_HEADER): http://msdn.microsoft.com/en-us/library/dn785069.aspx#BKMK_RDPECLIP

Posted by on 22 December 2014 | 1:22 pm

12/22 - Errata added for [MS-SMB2]: Server Message Block (SMB) Protocol Versions 2 and 3

Extensive changes to 3 subsections, Section 3.3.5.15.11, Handling a Query Network Interface Request, Section 3.3.5.5.3, Handling GSS-API Authentication, and Section 3.2.5.5, Receiving an SMB2 TREE_CONNECT Response: http://msdn.microsoft.com/en-us/library/dn785067.aspx#BKMK_SMB2

Posted by on 22 December 2014 | 1:20 pm

12/22 - Errata added for [MS-SMBD]: SMB2 Remote Direct Memory Access (RDMA) Transport Protocol

Changes to Section 3.1.5.8, Receiving a Data Transfer Message: http://msdn.microsoft.com/en-us/library/dn785067.aspx#BKMK_SMBD

Posted by on 22 December 2014 | 1:07 pm

12/22 - Errata added for [MS-NLMP]: NT LAN Manager (NTLM) Authentication Protocol

Changes to Section 3.2.5.1.2, Server Receives an AUTHENTICATE_MESSAGE from the Client: http://msdn.microsoft.com/en-us/library/dn785069.aspx#BKMK_NLMP

Posted by on 22 December 2014 | 1:06 pm

12/22 - Errata added for [MS-KILE]: Kerberos Protocol Extensions

Changes to Section 3.3.5.6, AS Exchange: http://msdn.microsoft.com/en-us/library/dn785066.aspx#BKMK_KILE

Posted by on 22 December 2014 | 1:05 pm

12/22 - Errata added for [MS-FSCC]: File System Control Codes

Changes to Section 2.3.81, FSCTL_OFFLOAD_WRITE Reply: http://msdn.microsoft.com/en-us/library/dn785066.aspx#BKMK_FSCC

Posted by on 22 December 2014 | 1:04 pm

12/22 - Errata added for [MS-FSA]: File System Algorithms

Changes to Section 2.1.1.1, Per Volume: http://msdn.microsoft.com/en-us/library/dn785066.aspx#BKMK_FSA

Posted by on 22 December 2014 | 1:02 pm

12/22 - Errata added for [MS-ECS]: Enterprise Client Synchronization Protocol

Multiple changes in Section 2, Messages, and Section 3, Protocol Details: http://msdn.microsoft.com/en-us/library/dn785066.aspx#BKMK_ECS

Posted by on 22 December 2014 | 1:00 pm

12/22 - Errata added for [MS-ADTS]: Active Directory Technical Specification

One change to Section 3.1.1.5.3.7.2, Undelete Constraints: http://msdn.microsoft.com/en-us/library/dn785066.aspx#BKMK_ADTS

Posted by on 22 December 2014 | 12:58 pm

SQL Server 2014 DML Triggers: Tips & Tricks from the Field

Editor's note: The following post was written by SQL Server MVP Sergio Govoni.

SQL Server 2014 DML Triggers: Tips & Tricks from the Field

SQL Server 2014 DML Triggers are often a point of contention between Developers and DBAs, between those who customize a database application and those who provide it. They are often the first database objects investigated when performance degrades. They seem easy to write, but writing an efficient Trigger is complex. Triggers do have one very important characteristic, though: they allow you to solve problems that cannot be handled in any other application layer. So, if you cannot work without them, this article will teach you tricks and best practices for writing and managing them efficiently. All examples in this article are based on the AdventureWorks2014 database, which you can download from the CodePlex website at this link.

Introduction

A Trigger is a special type of stored procedure: it is not called directly, but is activated by a certain event, with special rights that allow you to access the incoming and outgoing data stored in the special virtual tables called Inserted and Deleted. Triggers have existed in SQL Server since version 1.0, even before CHECK constraints. They always work in the same unit of work as the T-SQL statement that fired them. There are different types of Triggers: Logon Triggers, DDL Triggers and DML Triggers; the best known and most widely used type is the Data Manipulation Language Trigger, also known as the DML Trigger. This article covers only DML Triggers.

There are several options that modify the run-time behavior of Triggers:

Nested Triggers
Disallowed results from Triggers
Server Trigger recursion
Recursive Triggers

Each of these options has, of course, a default value chosen in line with Trigger development best practices. The first three are server-level options whose default value you can change with the sp_configure system stored procedure, whereas the last one is set at the database level. A minimal sketch of how to inspect and change these options is shown below.
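The following snippet is not part of the original article; it is a small illustrative sketch of where those four options live, shown with their default values in mind. Adjust the option values and the database name to your own environment before running anything like this.

USE [master];
GO

-- Server-level options. 'nested triggers' defaults to 1;
-- 'disallow results from triggers' (default 0) and 'server trigger recursion'
-- (default 1) are advanced options, so they are only visible after enabling
-- 'show advanced options'.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'nested triggers';                 -- display current value
EXEC sp_configure 'disallow results from triggers';  -- display current value
EXEC sp_configure 'server trigger recursion';        -- display current value
GO

-- Database-level option that controls recursive Triggers (OFF by default).
ALTER DATABASE [AdventureWorks2014] SET RECURSIVE_TRIGGERS OFF;
GO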
Are Triggers useful or damaging?

What do you think about Triggers? In your opinion, based on your experience, are they useful or damaging? You will meet people who say "Triggers are absolutely useful" and other people who say the opposite. Who is right? The two lists below summarize the main arguments behind the two opposing views.

People say that Triggers are useful because:

You can implement custom business logic without changing the user front-end or the application code
You can build an auditing or logging mechanism that could not be managed as efficiently in any other application layer

People say that Triggers are damaging because:

They can silently execute very complex pieces of code
They can degrade performance significantly
Issues in Triggers are difficult to diagnose

As usual, the truth is in the middle. I think that Triggers are a very useful tool that you can use when there is no other way to implement a database solution as efficiently as a Trigger can, but you have to test them very thoroughly before deploying them to a production environment.

Triggers activation order

SQL Server places no limit on the number of Triggers that you can define on a table, but you cannot create more than 2,147,483,647 objects per database, so the total number of Tables, Views, Stored Procedures, User-Defined Functions, Triggers, Rules, Defaults and Constraints must be lower than, or equal to, this number (the maximum value that can be represented by the integer data type).

Now, suppose that we have a table with multiple Triggers, all of them ready to fire on the same statement type, for example on the INSERT statement: have you ever asked yourself what the exact activation order of those Triggers is? In other words, is it possible to guarantee a particular activation order?

The Production.Product table in the AdventureWorks2014 database has no Triggers by design. Let's now create three DML Triggers on this table, all of them active for the same statement type: the INSERT statement. The goal of these Triggers is to print an output message that allows us to observe the exact activation order. The following piece of T-SQL code creates three sample DML AFTER INSERT Triggers on the Production.Product table.

USE [AdventureWorks2014];
GO

-- Create Triggers on Production.Product
CREATE TRIGGER Production.TR_Product_INS_1 ON Production.Product AFTER INSERT AS
  PRINT 'Message from TR_Product_INS_1';
GO

CREATE TRIGGER Production.TR_Product_INS_2 ON Production.Product AFTER INSERT AS
  PRINT 'Message from TR_Product_INS_2';
GO

CREATE TRIGGER Production.TR_Product_INS_3 ON Production.Product AFTER INSERT AS
  PRINT 'Message from TR_Product_INS_3';
GO

Let's list all the Triggers defined on the Production.Product table; to achieve this task we will use the sp_helptrigger system stored procedure, as shown in the following piece of T-SQL code.

USE [AdventureWorks2014];
GO
EXEC sp_helptrigger 'Production.Product';
GO

The output is shown in the following picture.

Picture 1 – All Triggers defined on Production.Product table

Now the question is: which will be the activation order of these three Triggers? We can answer this question by executing the following INSERT statement on the Production.Product table; when we execute it, all the DML INSERT Triggers fire.

USE [AdventureWorks2014];
GO

INSERT INTO Production.Product
(
  Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,
  ReorderPoint, StandardCost, ListPrice, DaysToManufacture, SellStartDate,
  RowGUID, ModifiedDate
)
VALUES
(
  N'CityBike', N'CB-5381', 0, 0, 1000, 750, 0.0000, 0.0000, 0, GETDATE(),
  NEWID(), GETDATE()
);
GO

The output shows the default Trigger activation order.

Message from TR_Product_INS_1
Message from TR_Product_INS_2
Message from TR_Product_INS_3

As you can see in this example, the Trigger activation order coincides with the creation order, but by design the Trigger activation order is undefined. If you want to guarantee a particular activation order you have to use the sp_settriggerorder system stored procedure, which allows you to set which Trigger fires first and which fires last. This configuration can be applied to the Triggers of each statement type (INSERT/UPDATE/DELETE). The following piece of code uses the sp_settriggerorder system stored procedure to set the Production.TR_Product_INS_3 Trigger as the first one to fire when an INSERT statement is executed on the Production.Product table.

USE [AdventureWorks2014];
GO

EXEC sp_settriggerorder
  @triggername = 'Production.TR_Product_INS_3'
  ,@order = 'First'
  ,@stmttype = 'INSERT';
GO

In the same way, you can set the last Trigger to fire.

USE [AdventureWorks2014];
GO

EXEC sp_settriggerorder
  @triggername = 'Production.TR_Product_INS_2'
  ,@order = 'Last'
  ,@stmttype = 'INSERT';
GO

Let's see the new Trigger activation order by executing another INSERT statement on the Production.Product table.
USE [AdventureWorks2014];
GO

INSERT INTO Production.Product
(
  Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,
  ReorderPoint, StandardCost, ListPrice, DaysToManufacture, SellStartDate,
  RowGUID, ModifiedDate
)
VALUES
(
  N'CityBike Pro', N'CB-5382', 0, 0, 1000, 750, 0.0000, 0.0000, 0, GETDATE(),
  NEWID(), GETDATE()
);
GO

The returned output shows our customized Trigger activation order.

Message from TR_Product_INS_3
Message from TR_Product_INS_1
Message from TR_Product_INS_2

In this section you have learned how to set the first and the last Trigger to fire in a scenario with multiple DML AFTER INSERT Triggers. Probably one question has come to your mind: "May I set only the first and the last Trigger?" The answer is: "Yes, currently you can set only the first Trigger and the last Trigger for each statement on a single table"; as a friend of mine (a DBA) says: "You can set only the first and the last Trigger because you should have at most three Triggers per statement on a single table! The sp_settriggerorder system stored procedure allows you to set which Trigger fires first and which fires last, so the third one will be in the middle, between the first and the last."

Triggers must be designed to work on multiple rows

One of the most frequent mistakes I have seen in my experience of debugging and tuning Triggers is this: the author of the Trigger does not consider that, sooner or later, the Trigger will have to work on multiple rows! I have seen many Triggers, especially ones that implement domain integrity constraints, that were not designed to work on multiple rows. In certain cases this mistake leads to incorrect data being stored (an example follows). Suppose that you have to develop a DML AFTER INSERT Trigger to prevent values lower than 10 from being stored in the SafetyStockLevel column of the Production.Product table in the AdventureWorks2014 database. This custom business logic may be required to guarantee no production downtime in your company when a supplier delivers late. The following piece of T-SQL code shows the CREATE statement for the Production.TR_Product_StockLevel Trigger.

USE [AdventureWorks2014];
GO

CREATE TRIGGER Production.TR_Product_StockLevel ON Production.Product
AFTER INSERT AS
BEGIN
  /*
    Prevent the insertion of products with a safety stock level lower than 10
  */
  BEGIN TRY
    DECLARE
      @SafetyStockLevel SMALLINT;

    SELECT
      @SafetyStockLevel = SafetyStockLevel
    FROM
      inserted;

    IF (@SafetyStockLevel < 10)
      THROW 50000, N'Safety Stock Level cannot be lower than 10!', 1;
  END TRY
  BEGIN CATCH
    IF (@@TRANCOUNT > 0)
      ROLLBACK;
    THROW; -- Re-Throw
  END CATCH;
END;
GO

A very good habit, before applying Triggers and changes in general to the production environment, is to spend time testing the Trigger code, especially for borderline cases and values. So, in this example, you have to test whether this Trigger is able to reject every INSERT statement that tries to store values lower than 10 in the SafetyStockLevel column of the Production.Product table. The first test you can do, for example, is to insert one wrong value and observe the error caught by the Trigger. The following statement tries to insert a product with a SafetyStockLevel lower than 10.
USE [AdventureWorks2014];
GO

-- Test one: Try to insert one wrong product
INSERT INTO Production.Product
(Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,
 ReorderPoint, StandardCost, ListPrice, DaysToManufacture,
 SellStartDate, rowguid, ModifiedDate)
VALUES
(N'Carbon Bar 1', N'CB-0001', 0, 0, 3 /* SafetyStockLevel */,
 750, 0.0000, 78.0000, 0, GETDATE(), NEWID(), GETDATE());

As you would expect, SQL Server has rejected the INSERT statement: the value assigned to SafetyStockLevel is lower than 10 and the Trigger Production.TR_Product_StockLevel has blocked the statement. The output shows that the Trigger worked well.

Msg 50000, Level 16, State 1, Procedure TR_Product_StockLevel, Line 17
Safety Stock Level cannot be lower than 10!

Now you have to test the Trigger with statements that insert multiple rows. The following statement tries to insert two products: the first product has a wrong value for the SafetyStockLevel column, whereas the value in the second one is right. Let's see what happens.

USE [AdventureWorks2014];
GO

-- Test two: Try to insert two products
INSERT INTO Production.Product
(Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,
 ReorderPoint, StandardCost, ListPrice, DaysToManufacture,
 SellStartDate, rowguid, ModifiedDate)
VALUES
(N'Carbon Bar 2', N'CB-0002', 0, 0, 4  /* SafetyStockLevel */,
 750, 0.0000, 78.0000, 0, GETDATE(), NEWID(), GETDATE()),
(N'Carbon Bar 3', N'CB-0003', 0, 0, 15 /* SafetyStockLevel */,
 750, 0.0000, 78.0000, 0, GETDATE(), NEWID(), GETDATE());
GO

The output shows that the Trigger has worked well again: SQL Server has rejected the INSERT statement because in the first row the value 4 for the SafetyStockLevel column is lower than 10 and cannot be accepted.

Msg 50000, Level 16, State 1, Procedure TR_Product_StockLevel, Line 17
Safety Stock Level cannot be lower than 10!

If you had to deploy your Trigger as soon as possible, you could convince yourself that this Trigger works properly; after all, you have already run two tests and all the wrong rows were rejected. You decide to apply the Trigger in the production environment. But what happens if someone, or an application, tries to insert two products with the wrong value in a different position from the one used in the previous test? Let's look at the following INSERT statement, in which the first row is right and the second one is wrong.

USE [AdventureWorks2014];
GO

-- Test three: Try to insert two rows
-- The first row is right, but the second one is wrong
INSERT INTO Production.Product
(Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,
 ReorderPoint, StandardCost, ListPrice, DaysToManufacture,
 SellStartDate, rowguid, ModifiedDate)
VALUES
(N'Carbon Bar 4', N'CB-0004', 0, 0, 18 /* SafetyStockLevel */,
 750, 0.0000, 78.0000, 0, GETDATE(), NEWID(), GETDATE()),
(N'Carbon Bar 5', N'CB-0005', 0, 0, 6 /* SafetyStockLevel */,
 750, 0.0000, 78.0000, 0, GETDATE(), NEWID(), GETDATE());
GO

The last INSERT statement completed successfully, but the inserted data do not respect the domain constraint implemented by the Trigger, as you can see in the following picture.

Picture 2 – Safety stock level domain integrity violated for product named "Carbon Bar 5"

The safety stock level value of the product named "Carbon Bar 5" does not respect the business constraint implemented by the Trigger Production.TR_Product_StockLevel: this Trigger was not designed to work on multiple rows.
The mistake is in the following assignment line:

SELECT
  @SafetyStockLevel = SafetyStockLevel
FROM
  inserted;

The local variable @SafetyStockLevel can hold only one value from the SELECT on the Inserted virtual table, and this value will be the SafetyStockLevel of the first row returned by the statement. If the first row returned by the query has a suitable value in the SafetyStockLevel column, the Trigger will consider the other rows right as well. In this case, disallowed values (lower than 10) from the second row onwards will be stored anyway! How can the Trigger's author fix this issue? By checking the SafetyStockLevel value of all rows in the Inserted virtual table: if the Trigger finds even one disallowed value, it has to return an error. Below is version 2.0 of the Trigger Production.TR_Product_StockLevel; it fixes the issue by changing the previous SELECT statement into an IF EXISTS (SELECT ...) test.

USE [AdventureWorks2014];
GO

ALTER TRIGGER Production.TR_Product_StockLevel ON Production.Product
AFTER INSERT AS
BEGIN
  /*
    Prevent the insertion of products with a safety stock level lower than 10
  */
  BEGIN TRY
    -- Testing all rows in the Inserted virtual table
    IF EXISTS (
               SELECT ProductID
               FROM inserted
               WHERE (SafetyStockLevel < 10)
              )
      THROW 50000, N'Safety Stock Level cannot be lower than 10!', 1;
  END TRY
  BEGIN CATCH
    IF (@@TRANCOUNT > 0)
      ROLLBACK;
    THROW; -- Re-Throw
  END CATCH;
END;
GO

This new version is designed to work on multiple rows and always works properly. However, the best implementation of this business logic is a CHECK constraint, which is the preferred way to implement custom domain integrity. The main reason to prefer CHECK constraints over Triggers for custom domain integrity is that all constraints (CHECK, UNIQUE and so on) are checked before the statement that fires them is executed, whereas AFTER DML Triggers fire after the statement has been executed. As you can imagine, for performance reasons, in this scenario the CHECK constraint solution is better than the Trigger solution; a minimal sketch of the equivalent constraint is shown below.
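As an illustration of that recommendation, here is a minimal sketch, not taken from the original article, of how the same rule could be expressed declaratively. The constraint name is made up for the example (AdventureWorks2014 already ships with its own constraints on this column), so adapt the name and the threshold to your environment.

USE [AdventureWorks2014];
GO

-- Enforce the same rule declaratively: SafetyStockLevel must be at least 10.
-- WITH NOCHECK skips validation of rows already present in the table;
-- remove it if you also want existing data to be verified.
ALTER TABLE Production.Product WITH NOCHECK
ADD CONSTRAINT CK_Product_MinSafetyStockLevel
    CHECK (SafetyStockLevel >= 10);
GO

-- Any INSERT or UPDATE that violates the rule is now rejected before the
-- statement completes, with no Trigger code involved.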
Trigger debugging

The most important programming languages have debugging tools integrated into their development environments. A debugger usually has a graphical interface that allows you to inspect variable values at run time, to analyze the source code and the program flow row by row, and to manage breakpoints. Every developer loves debugging tools because they are very useful when a program fails in a calculation or runs into an error. Now, think about a Trigger that silently performs a very complex operation. Suppose that this Trigger runs into a problem; these questions will probably come to your mind: "Can I debug a Trigger?" and, if it is possible, "How can I do it?" Debugging a Trigger is possible with the Microsoft Visual Studio development tool (except the Express edition). Consider the first version of the Trigger Production.TR_Product_StockLevel created in the section "Triggers must be designed to work on multiple rows" earlier in this article. As you have already seen, that first version does not work well with multiple rows because it was not designed for them.

The customer you deployed that Trigger to complains that some products have a safety threshold lower than 10 saved in the SafetyStockLevel column. You have to debug that DML AFTER INSERT Trigger; below you will learn how to do it. The first step in debugging a Trigger is to create a stored procedure that encapsulates a statement capable of firing the Trigger you want to debug. So we have to create a stored procedure that performs an INSERT statement on the Production.Product table of the AdventureWorks2014 database. The following piece of T-SQL code creates the Production.USP_INS_PRODUCTS stored procedure in the AdventureWorks2014 database.

USE [AdventureWorks2014];
GO

CREATE PROCEDURE Production.USP_INS_PRODUCTS
AS
BEGIN
  /*
    INSERT statement to fire Trigger TR_Product_StockLevel
  */
  INSERT INTO Production.Product
  (Name, ProductNumber, MakeFlag, FinishedGoodsFlag, SafetyStockLevel,
   ReorderPoint, StandardCost, ListPrice, DaysToManufacture,
   SellStartDate, rowguid, ModifiedDate)
  VALUES
  (N'BigBike8', N'BB-5388', 0, 0, 10 /* SafetyStockLevel */,
   750, 0.0000, 78.0000, 0, GETDATE(), NEWID(), GETDATE()),
  (N'BigBike9', N'BB-5389', 0, 0, 1  /* SafetyStockLevel */,
   750, 0.0000, 62.0000, 0, GETDATE(), NEWID(), GETDATE());
END;

The second step consists in executing the stored procedure created in the previous step through Microsoft Visual Studio. Open Microsoft Visual Studio and, in SQL Server Object Explorer, open the AdventureWorks2014 database tree, expand the Programmability folder and find the Production.USP_INS_PRODUCTS stored procedure in the Stored Procedures folder. Next, right-click the Production.USP_INS_PRODUCTS stored procedure; a context menu will appear and, when you select the item "Debug Procedure…", a new SQL Query page will open, ready to debug the stored procedure, as you can see in the following picture.

Picture 3 – Debugging USP_INS_PRODUCTS stored procedure through Microsoft Visual Studio

The execution pointer is set to the first executable instruction of the T-SQL script automatically generated by the Visual Studio debugger. Using the step-into debugger function (F11) you can execute the Production.USP_INS_PRODUCTS stored procedure step by step up to the INSERT statement that fires the Trigger you want to debug. If you press the step-into button (F11) while the execution pointer is on the INSERT statement, the execution pointer will jump into the Trigger, on its first executable statement, as shown in the following picture.

Picture 4 – Breakpoint within a Trigger

The debugger execution pointer is now on the first executable statement of the Trigger; you can now execute the Trigger's code and observe the content of the variables step by step. In addition, you can see the exact execution flow and the number of rows affected by each statement. If multiple Triggers fire on the same statement, the Call Stack panel will show the execution chain and you will be able to discover how the Trigger's code works.

Statements that each Trigger should have

A Trigger is optimized when its duration is brief. It always works within a transaction, and its locks remain active until the transaction is committed or rolled back. As you can imagine, the more time the Trigger needs to execute, the higher the probability that it will block other processes in the system.
The first thing you have to do to keep the Trigger execution short is to establish whether the Trigger has anything to do at all. If the statement that fired the Trigger affected no rows, there is nothing for the Trigger to do. So the first thing a Trigger should do is check the number of rows affected by the previous statement. The system variable @@ROWCOUNT tells you how many rows were changed by the previous DML statement. If the previous DML statement did not change any rows, @@ROWCOUNT is zero and there is nothing for the Trigger to do except give control back to the caller with the RETURN (T-SQL) command. The following piece of code should be placed at the beginning of every Trigger.

IF (@@ROWCOUNT = 0)
  RETURN;

Checking the @@ROWCOUNT system variable also lets you verify whether the number of affected rows is the number you expect; if not, the Trigger can give control back to the caller. In a Trigger active on multiple statement types, you can query the Inserted and Deleted virtual tables to know the exact number of inserted, updated or deleted rows. Next, consider that for each statement executed, SQL Server sends the number of affected rows back to the client; if you are not interested in the row counts of the statements inside a Trigger, you can set the NOCOUNT option to ON at the beginning of the Trigger and flip it back to OFF at the end. In this way you will reduce network traffic dramatically. In addition, you can check whether the columns of interest were updated or not. The UPDATE (T-SQL) function tells you whether the specified column was updated (within an update Trigger) or involved in an INSERT statement (within an insert Trigger). If the column was not updated, the Trigger has another chance to give control back to the caller; otherwise it goes on. In general, an update Trigger only has work to do when a column is updated and its values have actually changed; if no values have changed, the Trigger again has a chance to give control back to the caller. You can check whether the values have changed by comparing the Inserted and Deleted virtual tables. A minimal template that puts these checks together is sketched below.
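The following template is not part of the original article; it is a minimal sketch of the pattern just described. The Trigger name, the audited column and the placeholder action are illustrative, so adapt them to your own table and logic. Note that @@ROWCOUNT has to be read before any other statement (even a SET), because many statements reset its value.

USE [AdventureWorks2014];
GO

CREATE TRIGGER Production.TR_Product_Template
ON Production.Product
AFTER UPDATE
AS
BEGIN
  -- 1) Nothing to do if the firing statement affected no rows.
  IF (@@ROWCOUNT = 0)
    RETURN;

  -- 2) Suppress the extra "rows affected" messages produced inside the Trigger.
  SET NOCOUNT ON;

  -- 3) Leave early if the column we care about was not referenced by the statement.
  IF NOT UPDATE(SafetyStockLevel)
    RETURN;

  -- 4) Act only if at least one row really changed its value.
  IF EXISTS (SELECT 1
             FROM inserted AS i
             INNER JOIN deleted AS d ON d.ProductID = i.ProductID
             WHERE i.SafetyStockLevel <> d.SafetyStockLevel)
  BEGIN
    -- ... the real work of the Trigger goes here ...
    PRINT 'SafetyStockLevel changed for at least one row.';
  END;
END;
GO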
Summary

Triggers seem easy to write, but, as demonstrated, writing efficient Triggers is not a simple task. A best practice is to test them thoroughly before deploying them to your production environment. A good habit is to put plenty of comments inside them, especially before complex statements that may confuse even the Trigger's author.

About the author

Sergio Govoni has been a software developer since 1999; in 2000 he received a degree in Computer Science from an Italian state university. He has worked for over 11 years at a software house that produces a multi-company ERP on the Win32 platform. Today, at the same company, he is a program manager and software architect, constantly involved in several team projects, where he takes care of the architecture and the mission-critical technical details. He has been working with SQL Server since version 7.0 and has deep knowledge of implementing and maintaining relational databases, performance tuning and problem solving. He also trains people on SQL Server and its related technologies, writes articles and participates actively, as a speaker, in conferences and workshops of UGISS (www.ugiss.org), the first and most important Italian SQL Server User Group. He holds the following certifications: MCP, MCTS SQL Server. Sergio lives in Italy and loves to travel around the world. When he is not at work deploying new software and increasing his knowledge of technology and SQL Server, Sergio enjoys spending time with his friends and his family. You can meet him at conferences or Microsoft events. Follow him on Twitter or read his blogs in Italian and English.

About MVP Mondays

The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.

Posted by on 22 December 2014 | 12:16 pm

Using Azure Storage on Linux

We want to provide an update for Linux users on how to use Azure Storage, and we are pleased to announce some new options. Currently we are testing with Ubuntu 14.04; we will look at feedback to determine which additional distros to include over time.

Java
Our Java library, for which we announced General Availability earlier this year, has now been fully stress-tested on Linux. You can get the latest Java library through Maven (http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22azure-storage%22) or as source code (https://github.com/Azure/azure-storage-java).

Node.js
We released an updated Node.js library earlier this year, in preview for both Windows and Linux. You can get the Node.js library through npm (https://www.npmjs.org/package/azure-storage) or GitHub (https://github.com/Azure/azure-storage-node).

C++
We are pleased to announce that we have released preview version 0.4.0 of our C++ library, which now compiles for both Windows and Linux. 0.4.0 also contains new features, including blob download auto-resume functionality and control over the internal buffer size used in the HTTP layer, so we recommend that everyone using an older version upgrades. Compiling from source is supported for both Windows and Linux; the source code is available through GitHub (https://github.com/Azure/azure-storage-cpp). Binaries for Windows are also available through NuGet (http://www.nuget.org/packages/wastorage/).

Getting Started on Linux
The Azure Storage Client Library for C++ depends on Casablanca. Follow these instructions to compile it. Version 0.4.0 of the library depends on Casablanca version 2.3.0. Once this is complete, then:

Clone the project using Git:
git clone https://github.com/Azure/azure-storage-cpp.git
The project is cloned to a folder called azure-storage-cpp. Always use the master branch, which contains the latest release.

Install additional dependencies:
sudo apt-get install libxml++2.6-dev libxml++2.6-doc uuid-dev

Build the SDK for Release:
cd azure-storage-cpp/Microsoft.WindowsAzure.Storage
mkdir build.release
cd build.release
CASABLANCA_DIR=<path to Casablanca> CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release
make

In the above command, replace <path to Casablanca> to point to your local installation of Casablanca. For example, if the file libcpprest.so exists at location ~/Github/Casablanca/casablanca/Release/build.release/Binaries/libcpprest.so, then your cmake command should be:
CASABLANCA_DIR=~/Github/Casablanca/casablanca CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release

The library is generated under azure-storage-cpp/Microsoft.WindowsAzure.Storage/build.release/Binaries/. Once you have built the library, the samples should work equally well for Windows and Linux. If you like, you can build the samples as well:
cd ../samples
vi SamplesCommon/samples_common.h (edit this file to include your storage account name and key)
mkdir build.release
cd build.release
CASABLANCA_DIR=<path to Casablanca> CXX=g++-4.8 cmake .. -DCMAKE_BUILD_TYPE=Release
make

To run the samples:
cd Binaries
cp ../../BlobsGettingStarted/DataFile.txt . (this is required to run the blobs sample)
./samplesblobs (run the blobs sample)
./samplestables (run the tables sample)
./samplesqueues (run the queues sample)

The getting-started samples in this blog post are also helpful: http://blogs.msdn.com/b/windowsazurestorage/archive/2013/12/20/windows-azure-storage-client-library-for-cplusplus-preview.aspx

Differences between Windows and Linux Client Libraries
The only major difference is in logging.
On Windows, we use ETW logging directly. On Linux, we use Boost logging, which means that you can plug in your own sinks as you see fit. Each operation_context has a boost::log::sources::severity_logger<boost::log::trivial::severity_level>; if you want fine-grained control over logging, feel free to set your own logger objects. Note that, in addition to what Boost provides, we have an internal log_level that we use. Each operation_context gets a log_level that you can set. The default value is set by client_log_level operation_context::default_log_level, which you can also use to turn logging on or off for the library as a whole. The default is that logging is off entirely.

What's next
We're excited about supporting Linux-based usage of Azure Storage from Java, Node.js and C++. We encourage you to try it out and let us know where we can improve by leaving feedback on GitHub or on this blog. We'll be working to bring these all to general availability.

Adam Sorrin and Jeff Irwin
Microsoft Azure Storage Team

Posted by on 22 December 2014 | 12:15 pm

SQL Server 2014 High-Availability and Multi-Datacenter Disaster Recovery with Multiple Azure ILBs

I already posted in my blog several articles related to high-availability (HA) and disaster recovery (DR) for SQL Server in Azure Virtual Machines (IaaS VM), but this time I’m going to add something new to what you may already know, then let me recap the situation. Today, if you want HA in Azure for SQL Server, you have to use AlwaysOn Availability Group since SQL Mirroring is a deprecated feature and Failover Clustering (AlwaysOn FCI) is not supported yet, due to the lack of shared storage being possible in Azure VMs. Prolog The first scenario I covered, more than one year ago, was related to implementing a simple AlwaysOn Availability Group, with two SQL Server instances in a single Azure datacenter and no support for Azure Internal Load Balancer (ILB): SQL Server 2012 AlwaysOn Availability Group and Listener in Azure VMs: Notes, Details and Recommendations http://blogs.msdn.com/b/igorpag/archive/2013/09/02/sql-server-2012-alwayson-availability-group-and-listener-in-azure-vms-notes-details-and-recommendations.aspx As you can easily realize, this architecture presents two major weak points:   No DR site for protection from a complete Azure datacenter loss; An internet facing endpoint for SQL Server AlwaysOn Availability Group (AG) listener must be exposes; The second scenario that I worked on, with the possibility to connect Azure Virtual Networks (VNETs) in different regions, included a DR site for Geo-Disaster Recovery over a second Azure datacenter: Deep Dive: SQL Server AlwaysOn Availability Groups and Cross-Region Virtual Networks in Azure http://blogs.msdn.com/b/igorpag/archive/2014/07/03/deep-dive-sql-server-alwayson-availability-groups-and-cross-region-virtual-networks-in-azure.aspx At this time, Azure Internal Load Balancer (ILB) was not supported yet, and then you had to expose a SQL Server endpoint over the Internet, and use the Cloud Service Virtual IP (VIP) to provide access to SQL Server instances through the AG Listener.  Architecture Finally, Microsoft recently announced support for Azure ILB usage for AlwaysOn AG Listener, that is the last missing piece to have the perfect HA and Geo-DR architecture. This brings me to the third (and last for now) scenario that I recently implemented for one of my partner: Let me recap here the main architectural choices and points of attentions in Azure: No public endpoint exposed over the Internet for SQL Server: in both the primary and the secondary sites I used Azure ILB (1 for each site), then only accessible from services and VMs in the same VNETs used here. Be aware that only one ILB per Cloud Service can be used. I created one VNET in the primary Azure datacenter and one VNET in the secondary Azure datacenter, then I connected them using Azure VPN. In this this specific scenario, high-performance Azure VPN Gateway has been used since more than enough in term of supported bandwidth, be sure to review its characteristics before deciding to adopt it: Azure Virtual Network Gateway Improvements http://azure.microsoft.com/blog/2014/12/02/azure-virtual-network-gateway-improvements If this VPN Gateway will not satisfy your bandwidth requirements, you need to consider Azure Express Route as indicated in the link below: ExpressRoute or Virtual Network VPN – What’s right for me? http://azure.microsoft.com/blog/2014/06/10/expressroute-or-virtual-network-vpn-whats-right-for-me In the primary Azure datacenter, I installed two SQL Server 2014 VMs in the same Cloud Service (CS) and Availability Set to ensure 99,95% HA as requested by Azure SLA. 
I also installed them in the same VNET and same subnet. Each SQL VM uses static IP addresses inside the VNET. Manage the availability of virtual machines http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-manage-availability For the above SQL Server instances, I configured synchronous data replication with automatic failover: in this way, in case of a single SQL VM failure, AlwaysOn AG will take over and failover to the second SQL Server VM with no data loss and no manual intervention: NOTE: there is nothing here preventing you to allow readable secondaries, you can change this AlwaysOn configuration setting dynamically without any service interruption. In the secondary Azure datacenter, I installed a third single VM in its own CS, AS, VNET and subnet, using a static IP address: this is the minimum requirement, to have an effective DR solution, but if you want to have more protection you can also install a fourth SQL VM here to maintain HA also in the case of complete primary datacenter loss. As you can see in the picture above, I used asynchronous data replication for the third SQL instance for the following reasons: SQL Server 2014 allows only 3 sync replicas (1 primary + 2 secondaries) but only 2 instances for automatic failover; Enabling synchronous data replica between remote datacenter will hurt the database performances on the primary instance since each transaction should be also committed by the remote SQL Server instance; IMPORTANT: Async data replication means possible data loss (RPO>0) in case of a complete primary datacenter loss. If you want zero data loss (RPO=0), you should configure synchronous data replication also for the SQL Server instance in the DR site, but it’s highly recommended to test the performance impact of network latency between the two remote Azure datacenters. Be also aware that automatic failover between datacenters is not possible today with SQL Server 2014 (RTO>0). All the above SQL Server instances are part of the same AlwaysOn AG and Cluster. Since it is a requirement for Cluster and AlwaysOn AG, all the VMs in both datacenters are part of the same Active Directory (AD) Domain: for this reason I installed two DCs in the primary datacenter (same own Cloud Service, VNET, subnet and Availability Set) and one DC in the secondary datacenter (separate Cloud Service, VNET, subnet and Availability Set). Each DC is also a DNS Server and uses static IP addresses inside the VNET. To complete the Cluster configuration required for this scenario, I removed the quorum vote for the SQL Server VM in the secondary DR site and created a new VM in the primary datacenter (same Cloud Service, VNET, subnet and Availability Set as for SQL Server VMs): no SQL Server installed in this VM, its only purpose is to be the Cluster Witness and provide a cluster vote to reach the quorum and then ensure Cluster healthy state in case of secondary datacenter loss. 
For details on the quorum configuration, see:

Configure and Manage the Quorum in a Windows Server 2012 Failover Cluster
http://technet.microsoft.com/en-us/library/jj612870.aspx

Finally, I used Network Security Groups (NSG) to harden the security configuration and keep strict control over the network communications allowed between the different subnets in both VNETs:

Network Security Groups
http://azure.microsoft.com/blog/2014/11/04/network-security-Groups

Since I used the Azure ILB, and therefore an internal, non-Internet-facing IP (DIP), and because of the networking restrictions that apply to AlwaysOn AG support in Azure, I installed the application VMs in a different Cloud Service, subnet and Availability Set, but inside the same VNET in each site. This is necessary for the application to be able to reach the SQL databases through the AG listener.

Azure Storage accounts are an important part of the design, since all VM OS disks and additional data disks must reside on persistent Azure Blob storage. If you are going to use multiple disks for some (or all) VMs, as is likely for SQL Server VMs, it is highly recommended not to use Azure geo-replication for the storage accounts involved, since it is not supported for this scenario: each disk is a blob, and storage replication in Azure is asynchronous and can guarantee write ordering (and therefore consistency) only at the single-blob level, not across multiple blobs (that is, multiple disks). Please note that geo-replication (GRS) is enabled by default when you create a new Azure storage account, so be sure to switch to "Locally Redundant" (LRS):

Azure Storage Redundancy Options
http://msdn.microsoft.com/en-us/library/azure/dn727290.aspx

In addition to the storage account replication mode, you also need to carefully consider how many storage accounts to use. For Azure Standard Storage there is a limit of 20K IOPS per storage account, which means a maximum of 40 disks of 1 TB each (500 IOPS per disk). Depending on the VM size you can attach only a certain number of disks, and since you have to accommodate at least two SQL Server VMs, plus the Domain Controllers and the witness VM, be sure to do the math correctly and, if necessary, use more than one storage account in each Azure datacenter deployment. The first link below also mentions Azure Premium Storage (currently in preview): with this option you can get up to 5K IOPS per disk and up to 50K IOPS per VM.

Azure Storage Scalability and Performance Targets
http://msdn.microsoft.com/library/azure/dn249410.aspx

Virtual Machine and Cloud Service Sizes for Azure
http://msdn.microsoft.com/en-us/library/azure/dn197896.aspx

Inside the SQL Server VMs, I attached the maximum number of disks permitted by the specific VM size and then used the Windows Server 2012 R2 (guest OS) Storage Spaces / storage pool technology to group all the data disks together and present a single logical volume with increased IOPS. More details are available at the link below:

Best Practices & Disaster Recovery for Storage Spaces and Pools in Azure
http://blogs.msdn.com/b/igorpag/archive/2014/06/10/best-practices-amp-disaster-recovery-for-storage-spaces-and-pools-in-azure.aspx

Multiple ILBs Configuration

The SQL team has already published a step-by-step procedure on how to create an AlwaysOn listener using the Azure Internal Load Balancer, so let's use the article below as a starting point:

Tutorial: Listener Configuration for AlwaysOn Availability Groups
http://msdn.microsoft.com/en-us/library/azure/dn425027.aspx

This article describes how to set up a listener using the ILB, but only for a single datacenter and therefore with a single ILB.
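Before moving on to the multi-datacenter steps, one client-side detail is worth sketching: because the listener's clustered network name ends up depending on one IP address per subnet (see Step [5.6] below), client connection strings should enable MultiSubnetFailover so reconnection after a failover is fast. The sketch below is my own illustration and not part of the referenced tutorial; the listener name, database, port and driver are assumptions.

```python
import pyodbc

# Hypothetical client sketch: connect through the AG listener (published via the
# Azure ILB) from an application VM in the same VNET. All names are assumptions.
CONN_STR = (
    "DRIVER={SQL Server Native Client 11.0};"
    "SERVER=tcp:ag-listener.contoso.local,1433;"  # assumed listener name and port
    "DATABASE=MyAppDb;"
    "Trusted_Connection=yes;"
    "MultiSubnetFailover=Yes;"  # attempt all IPs registered for the listener name in parallel
)

conn = pyodbc.connect(CONN_STR)
cursor = conn.cursor()
# Quick check: which replica did we actually land on?
cursor.execute("SELECT @@SERVERNAME;")
print("Connected to:", cursor.fetchone()[0])
conn.close()
```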
Here is what you have to do to build a multi-datacenter configuration with one ILB per site:

- Step [2.9]: execute it first for the primary datacenter, then repeat it for the secondary datacenter. Be sure to include in each VNET only the VMs allocated inside it and, obviously, to use a different Cloud Service, ILB name, subnet name and static IP for each site.
- Step [5.6]: here you need two IP addresses, one per VNET; be sure to configure the dependency of the CAP network name on these IPs using "OR", not "AND".
- Step [5.11]: execute it for each VNET and the related SQL VM Cloud Service. For the Cluster network name, use the network name corresponding to your VNET as shown in Failover Cluster Manager; for the IP resource name, use the entry in the dependency list from the previous point that belongs to the current VNET; for the ILB IP, use the ILB created locally in each VNET.

Now you can complete the procedure described in the article and test the failover behavior of the AG and its associated listener. If everything is set up correctly, the AG should always come online successfully, but with only one underlying IP, as shown in the picture below:

SQL Server 2014 and Azure

Even if not strictly related to HA and DR, if you want to get the most out of SQL Server 2014 when it is installed in Azure, I strongly recommend evaluating the following integration features:

SQL Server 2014 Backup to Azure Blob Storage: SQL Server 2014 can use Azure Blob storage as the target media for database backups. No VM disks are needed to store your backups; you write directly to Azure Blob storage and benefit from its three local replicas (and, with geo-redundant storage, three more remote replicas for geo-DR). Backup sets can also be encrypted. You can read more details at the following link:

SQL Server Backup to URL
http://msdn.microsoft.com/en-us/library/dn435916.aspx

SQL Server 2014 Managed Backup: this is a new way to manage backups in SQL Server. The engine itself takes care of backing up existing and future (not yet created) databases based on policies the DBA defines; it is not simple scheduling but a dynamic, intelligent mechanism: the backup strategy used by SQL Server Managed Backup to Windows Azure is based on the retention period and the transaction workload on each database. This feature uses Azure Blob storage as the backup target, as explained in the previous point. You can read more details at the following link:

SQL Server Managed Backup to Windows Azure
http://msdn.microsoft.com/en-us/library/dn449496(v=sql.120).aspx

Azure D-SERIES local temporary SSD storage: Azure D-series VMs come with a very fast local temporary SSD disk, and you can leverage this resource with the following SQL Server 2014 features:

Buffer Pool Extension: with this feature SQL Server can use SSD storage to extend its cache, providing an extra layer for data caching. You can read more details at the link below:

Buffer Pool Extension
http://msdn.microsoft.com/en-us/library/dn133176.aspx

TEMPDB allocation: since the VM's local SSD drive is temporary and not fully persistent, using it for the TEMPDB database is an optimal and recommended choice. You then get a high-performance TEMPDB without wasting the Azure persistent data disks, which can be dedicated entirely to user databases. A minimal configuration sketch follows.
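The sketch below is my own illustration of those two settings, not taken from the original post: it assumes a D-series VM whose temporary SSD is the D: drive, a folder that is recreated at VM startup (remember the temporary disk can be wiped), and illustrative server, path and size values. It issues plain T-SQL through pyodbc.

```python
import pyodbc

# Hypothetical sketch: place tempdb and the Buffer Pool Extension on the local
# temporary SSD (assumed to be D:) of an Azure D-series VM. Server name, folder,
# file names and sizes are assumptions; run as sysadmin. The tempdb file move
# only takes effect after the next SQL Server restart, and D:\SQLTemp must be
# recreated at VM startup because the temporary disk can be wiped.
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};"
    "SERVER=tcp:sqlvm01,1433;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True)  # these statements cannot run inside a user transaction
cur = conn.cursor()

# Move the tempdb data and log files to the temporary SSD drive.
cur.execute("ALTER DATABASE tempdb MODIFY FILE "
            "(NAME = tempdev, FILENAME = 'D:\\SQLTemp\\tempdb.mdf');")
cur.execute("ALTER DATABASE tempdb MODIFY FILE "
            "(NAME = templog, FILENAME = 'D:\\SQLTemp\\templog.ldf');")

# Enable the Buffer Pool Extension on the same SSD (size is illustrative).
cur.execute("ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION ON "
            "(FILENAME = 'D:\\SQLTemp\\bpe.bpe', SIZE = 64 GB);")

conn.close()
```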
You can read more details at the following link:

Using SSDs in Azure VMs to store SQL Server TempDB and Buffer Pool Extensions
http://blogs.technet.com/b/dataplatforminsider/archive/2014/09/25/using-ssds-in-azure-vms-to-store-sql-server-tempdb-and-buffer-pool-extensions.aspx

That's all, folks! This is my last blog post of the year; I wish you a merry Christmas and a happy new year. Let me know if you have any feedback or questions; as usual, you can follow me on Twitter (@igorpag). Regards.

Posted by on 22 December 2014 | 11:53 am

Microsoft Remote Desktop Preview V8.1.7 update for Windows Phone 8.1 available today for download

My name is David Bélanger, and I work on the Remote Desktop team. Today marks the release of the last 2014 update to the Microsoft Remote Desktop Preview app for Windows Phone 8.1. We are still hard at work on enabling support for Remote Desktop Gateway and Remote Resources (RemoteApp and Desktop Connections), but as we are still putting the final touches on these features, we are delaying their release to early next year. Today's release includes some improvements to Azure RemoteApp, a new Azure service from Microsoft. For details on the addition of this service to the Windows Phone app, check out last month's release blog. Continue voting for the features you'd like to see in future versions of the app on our feature requests site, as it helps us make sure we build the right functionality into the app. A link to the feedback site is also available directly inside the app, on the about page.

Pinning apps to the Start Screen

To reduce the number of steps needed to get to the apps you care about, we are adding the ability to pin apps available on your Windows Phone from Azure RemoteApp directly to your Start Screen. To get started, press and hold one of the apps on the apps pivot after you sign in to Azure RemoteApp and select the pin to start option.

Figure 1: Press and hold menu for apps.

Once an app has been pinned, it is available on your Start Screen and can be launched with a single tap. We support creating Small and Medium sized tiles, and apps can also be grouped together into the folders now available on Windows Phone. This feature comes in addition to the ability to pin desktop connections, which was already available.

Figure 2: Start Screen with pinned apps and a desktop, showing different sizes and groups.

Background refresh and notifications

To keep your list of apps up to date, we've enabled automatic syncing while you use the app. If the list of apps available to you has changed (for example, when your IT admin publishes new apps), the list on the apps pivot is updated to reflect the changes, which can include adding or removing entries. Also, if you are sent new invitations that haven't been accepted on one of your devices, you will receive a notification taking you to the invitations page, where you can decide whether you want to access the new apps (make sure you trust the invitation sender).

Figure 3: Action center showing the new notification.

Try it today

I encourage you to download the Remote Desktop Preview app from the Windows Phone store and try out the new features. You can also connect to the Azure RemoteApp service using our clients for other devices running Windows 7 SP1 and later, iOS, Mac OS X and Android by visiting the Azure RemoteApp site and clicking Install Client in the top right. Stay tuned as we work on adding more enterprise-focused features, including Remote Desktop Gateway and Remote Resources (RemoteApp and Desktop Connections), in future updates.

Note: Questions and comments are welcome. However, please DO NOT post troubleshooting requests using the comment tool at the end of this post. Instead, post a new thread in the Remote Desktop clients forum. Thank you!

Posted by on 22 December 2014 | 11:31 am