If you are working with Azure, you need to check out the Azure PowerShell Cmdlets.
powerful + simple = AWESOME !
Here are a few very simple samples that I use to manage my Azure VMs. (There are Cmdlets to cover all of the Azure features and I’ll cover managing Cloud Services in a separate post.)
Figure: “Get-AzureVM”. It lets you easily see which VMs you currently have provisioned (i.e. which ones you are being charged for).
Figure: The Stop command lets you de-provision a VM so you aren’t charged for it while you aren’t using it. I schedule this to run on each of my dev VMs.
No more $200 Azure bills for me because I forgot to shut down my large instance !
Figure: Of course you can start your VMs as well.
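As a quick sketch, the stop/start pattern looks something like this (assuming the Azure PowerShell module is loaded and a subscription is selected; the cloud service and VM names here are made up):

```powershell
# List the VMs you are currently being charged for
Get-AzureVM

# De-provision a dev VM overnight so it stops accruing charges
# (service and VM names are hypothetical)
Stop-AzureVM -ServiceName "MyDevService" -Name "MyDevVM" -Force

# Bring it back in the morning
Start-AzureVM -ServiceName "MyDevService" -Name "MyDevVM"
```

Wrap the Stop call in a scheduled task and you’ll never pay for an idle dev VM again.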
In case you weren’t sold… here is a list of a few of my other favourites.
Get-AzureRole (List your roles)
Get-AzureService (List cloud services)
Set-AzureRole (Sets the # of instances)
Get-AzureVM (Get VM info)
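For example, scaling a cloud service role out to three instances is a one-liner (the service and role names below are invented for illustration):

```powershell
# Set the number of running instances for a web role
# (names are hypothetical)
Set-AzureRole -ServiceName "MyCloudService" -Slot "Production" -RoleName "MyWebRole" -Count 3
```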
For more info:
Check out MSDN http://msdn.microsoft.com/en-us/library/jj156055.aspx
If you are a ReSharper user (and you should be), check out the new dependency graph. It is awesome for easily getting a high level view of the dependencies between projects and layers in your solution.
Figure: ReSharper 8 introduces a dependency graph to its architecture tools.
Figure: I structure my solution to reflect the Onion Architecture. http://rules.ssw.com.au/SoftwareDevelopment/RulesToBetterMVC/Pages/The-layers-of-the-onion-architecture.aspx
I have layers for UI, Business Logic Interfaces, Repository Interfaces and the Domain Model. I then inject my dependencies into these layers.
I like to structure the dependencies under a different solution folder so as to emphasise that the dependencies exist outside of the application core.
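To sketch what that injection looks like in code (the interface and class names here are invented for illustration), the UI layer references only an interface from the application core, and the concrete implementation lives in a Dependency project that the IoC container wires in at runtime:

```csharp
// Application core (e.g. a Repository Interfaces project): the abstraction
public interface ICustomerRepository
{
    Customer GetById(int id);
}

// Dependencies folder (e.g. a Data project): the concrete implementation
public class SqlCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // The real implementation talks to the database here
        return new Customer { Id = id };
    }
}

// UI layer: depends on the interface only; the IoC container
// injects the concrete repository via the constructor
public class CustomerController
{
    private readonly ICustomerRepository _repository;

    public CustomerController(ICustomerRepository repository)
    {
        _repository = repository;
    }
}

public class Customer
{
    public int Id { get; set; }
}
```

On the dependency graph this shows up as arrows from the UI and Dependencies projects into the core, and never the other way around.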
Figure: R# now generates a dependency graph of your solution (or a selected part of your solution) and by default groups the projects by Solution folders.
I love this because with a few clicks I can get a very clear idea of the dependencies between the different layers in my solution, and see where references exist to dependency projects.
I unselected three items to remove noise from the diagram: the Solution Items folder (which contains the deployment project and documentation), the Common folder (which contains cross-cutting concerns) and the Dependency Resolver project which configures the IOC container.
Figure: I generated the ReSharper dependency diagram as preparation for the first Sprint Review meeting and immediately noticed a dependency from my Client (UI) layer to a ‘Dependency’ project.
No No No No No !!!
Figure: We refactored to inject the dependency into the application core and removed the reference to Data.FileStream from the UI Project.
The dependency graph now looks awesome ! There are no lines from the Clients window to the Dependencies window!
Figure: As a comparison, this is how the Visual Studio Dependency Graph looks when first created. I would usually then remove the outlined items to remove noise.
Figure: After removing the extra projects, my Visual Studio Dependency Graph is more readable, but I would love to see the ability to group projects by Solution Folder.
Figure: The Visual Studio architecture tools are more complete and advanced than the ReSharper ones at present and the Layer Diagram is an invaluable tool that allows you to specify all the layers of your solution, assign projects and classes to particular layers and then have the architecture validated when you build.
What I love about the ReSharper dependency graph is how easy it makes it to get a high level overview of my solution.
It also has the ability to track changes to your architecture as your project progresses, and to indicate metrics. I’ll let you know how these features work out for me as the project progresses.
The originators of Scrum, Ken Schwaber and Jeff Sutherland, have released an updated version of the Scrum Guide.
Go here to get the latest version
Go here for their summary of the changes
Go here for a 17 min video of them discussing the changes.
Here are my key points taken from the video, and the summary page, and a few comments about them.
Interesting points on the uptake of Scrum
- two years ago the US Congress passed a law that all Department of Defence IT Projects must be agile
- this year.. the US embedded the Agile Manifesto in its government regulations
- The US post office has mandated Scrum everywhere in IT
- The Gartner group says ‘abandon waterfall.. get agile’
- Jim Johnson at the Standish Group reports: in a survey of 50-100,000 projects, where success is defined as on time, on budget, with happy customers (in itself a waterfall benchmark):
The success rate in waterfall projects is 14%
The success rate in agile projects is 42% (Jeff believes this is not great, but rather a conservative figure)
- The Forrester report: next year there will be 3 trillion dollars spent on software
Ken: “our way of trying to narrow the gap between software that is needed and the available provisioning capacity is
- higher productivity
- higher value
- higher quality”
The 6 key changes to the 2013 Scrum Guide
1. Re-emphasising Transparency
A new section on Artefact Transparency has been added.
The key point made was that Ken and Jeff wanted to make it clear that if things are not visible to everyone, bad decisions can be made.
All of the Scrum artefacts should be transparent, and easily understood by everyone who is looking at them in order to maximise value and minimise risk.
2. Sprint Planning – One Section and a Sprint Goal
The sprint planning meeting is no longer divided into two sections.
The concept of the Sprint Goal was in the 2011 Scrum Guide, but it wasn’t explained clearly enough or enforced strongly enough.
In the current edition, you must come out of sprint planning with a sprint goal.
The sprint goal should provide focus to the team.
3. Product Backlog – being ‘Ready’
The term ‘Grooming’ has been replaced with the term ‘Refinement’ (due to cultural sensitivity).
The ‘Ready’ state is being emphasised (as has previously been done with the ‘Done’ state)
Items that are ‘Ready’ are defined clearly enough and with enough detail to be able to be added to a sprint.
BEFORE the sprint planning meeting, PBIs that may be added to the sprint should be refined until they are ‘Ready’. This will accelerate the sprint planning meeting.
I think this is awesome. As a consultant, one of the issues I commonly find on Scrum teams is that the Sprint Planning meeting drags.
I always push back on Product Owners: if they want punchy Sprint Planning meetings, they should prioritise having a well groomed backlog, with all of the PBIs likely to be included in the next sprint ready for estimation and inclusion.
4. Time Boxed Events
Once you set the length of the sprint it does not change, but the times specified for the other events are maximums.
(I don’t see this as a big change).
5. Daily Scrum – a Planning Event
The new guide reinforces the importance of the Daily Scrum as a PLANNING event, not just a status event.
It provides a great focus on each team member contributing and delivering value.
Ken: ‘It is about creating situations for the team to work together’
Every day the team should organise how they will work together to accomplish the sprint goal
The input to the meeting is how well the team is going.
The output from the meeting is a new or revised plan that optimises the team’s efforts.
The three questions have been reformulated to emphasise the team over the individual.
a. What did I do yesterday that helped the Dev Team meet the Sprint Goal?
b. What will I do today to help the Dev Team meet the Sprint Goal?
c. Do I see any impediment that prevents me or the Dev Team from meeting the Sprint Goal?
There is a greater focus on the sprint goal, rather than just on the status of what ‘I’ did.
Only tasks that contribute to the sprint goal are relevant.
A great example is given
- if a team member says ‘I spent the day writing this great report’ but the report does not contribute to the sprint goal
a. it should not have been included in the daily scrum (as it does not contribute to the sprint goal)
b. if the team member was busy, but did not contribute to the sprint goal, then as far as the daily scrum is concerned… they did nothing
c. it could actually be considered an impediment as the team member is not focused on the sprint goal
6. Delivering Value
The concept of value is reinforced.
The product backlog should be prioritised by value.
At the end of the sprint review, one of the primary outputs should be a refined product backlog that will optimise the VALUE of the work the team is doing.
During the sprint review, the question is asked: ‘based on what was done in the sprint, what are the next things that could be done to optimise value?’
The point is to maximise the value that the team delivers.
I say: Bravo !
Great quote: “We’ve got goals, we’ve got value, we’ve got transparency, we’ve got more teamwork ..it’s all good.”
The default Agile TFS template ships with three states: New, Active and Closed.
A common question that I am asked is how to add an extra stage to the TFS taskboard.
While this is not trivial in TFS 2012, it’s really not that hard once you know how, and is being made easier in newer versions of TFS.
Figure: We will demonstrate adding a ‘Testing’ column.
Step 1: Ensure that you are an administrator of the Team Project you are updating
Step 2: Download the Team Foundation Server 2012 Power Tools
- In Visual Studio select Tools, and then Extensions and Updates
Figure: Select Online from the left menu, enter Team Foundation 2012 in the search field, click the Download button on Microsoft Visual Studio Team Foundation Server 2012 Power Tools
Step 3: Export the Task Work Item Type
To add a new column to the task board, we need to add that status to the work item type definition.
Figure: From the Tools menu select, Process Editor, then Work Item Types and then Open WIT from Server
Figure: Expand the correct Team Project and select the Task work item type.
Step 4: Add the Testing state to the Task WIT
Figure: Select the Workflow tab. Open the toolbox and drag a State component onto the design surface. Right click on the new State, select Properties and set the Name property to Testing
Figure: Select the Transition Link component from the toolbox. Now click on the Active state and drag your mouse to the Testing state.
Figure: A transition will have been added from Active to Testing.
Figure: Right click on the Transition and choose Open Details. Go to the Reasons tab and edit the reason value. Suggested text: ‘Ready for Testing’. You can click on the chevrons to expand the transition to be able to more clearly see the assigned properties.
Additional actions and Fields can also be specified but that is beyond the scope of this post.
Figure: Repeat the process above to add transitions from Testing to Complete (with a reason of ‘Testing Complete’) and from Testing back to Active (with a reason of ‘Failed Testing’).
Figure: Save the template to a known location on your hard drive so that it can be imported in the next step. E.g. c:\Temp\TeamProjectName_Task.wit
Step 5: Import the saved WIT
Figure: From the Tools menu select Process Editor, then Work Item Types and then Import WIT.
Figure: Browse to the location of the saved file, select the Team Project you wish to import the WIT into and click OK.
Figure: When you edit a task, the Testing status is now available. It is not yet however added to the board.
Step 6: Export the Process Template Config
This is the part that I always forget to do. After you have edited the Work Item Type, you still need to update the process template to include the State on the Task Board.
Figure: Open a command prompt, change to the Visual Studio IDE Folder and execute the following command
witadmin exportcommonprocessconfig /collection:CollectionURL /p:ProjectName /f:"DirectoryPath\CommonConfiguration.xml"
For our instance, the command required was:
witadmin exportcommonprocessconfig /collection:http://ourserver:8080/tfs/CollectionName /p:ProjectSparrow /f:"c:\Temp\CommonConfiguration.xml"
Step 7: Edit the Process Template Config
Figure: Edit the exported file. Find the TaskWorkItems section and add the following line:
<State type="InProgress" value="Testing" />
Save the file.
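For context, after the edit the TaskWorkItems section of the exported file should look something like this (New, Active and Closed are the default Agile template states; the Testing line is the one we added):

```xml
<TaskWorkItems category="Microsoft.TaskCategory">
  <States>
    <State type="Proposed" value="New" />
    <State type="InProgress" value="Active" />
    <State type="InProgress" value="Testing" />
    <State type="Complete" value="Closed" />
  </States>
</TaskWorkItems>
```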
Step 8: Import the Process Template Config
Figure: Execute the following command
witadmin importcommonprocessconfig /collection:CollectionURL /p:ProjectName /f:"DirectoryPath\CommonConfiguration.xml"
For our instance, the command required was:
witadmin importcommonprocessconfig /collection:http://ourserver:8080/tfs/CollectionName /p:ProjectSparrow /f:"c:\Temp\CommonConfiguration.xml"
Figure: View your task board and you will have your new column!
I had a great chat with John Papa about building Single Page Applications to get a fantastically responsive web UI.
I love continuous deployment to Windows Azure from Team Foundation Service! http://www.windowsazure.com/en-us/develop/net/common-tasks/publishing-with-tfs/
Often though we need the flexibility of building and working directly with the WebDeploy package.
Using a precision mocking framework (such as Moq or NSubstitute) encourages developers to write maintainable, loosely coupled code.
Mocking frameworks allow you to replace a section of the code you are about to test, with an alternative piece of code.
For example, this allows you to test a method that performs a calculation and saves to the database, without actually requiring a database.
There are two types of mocking framework.
The Monster Mocker (e.g. Microsoft Fakes or TypeMock)
This type of mocking framework is very powerful and allows replacing code that wasn’t designed to be replaced.
This is great for testing legacy code (tightly coupled code with lots of static dependencies) and SharePoint.
Figure: Bad Example – Our class is tightly coupled to our authentication provider, and as we add each test we are adding *more* dependencies on this provider. This makes our codebase less and less maintainable. If we ever want to change our authentication provider “OAuthWebSecurity”, it will need to be changed in the controller, and every test that calls it.
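A rough sketch of the kind of coupling the bad example showed (the class and method names here are simplified placeholders, not the real OAuthWebSecurity API):

```csharp
// Bad: the controller calls a static provider directly, so every
// unit test of SignIn exercises the real provider as well.
public class AccountController
{
    public bool SignIn(string userName, string password)
    {
        // Static dependency: it cannot be substituted in a unit test
        return AuthProvider.ValidateUser(userName, password);
    }
}

public static class AuthProvider
{
    public static bool ValidateUser(string userName, string password)
    {
        // In production this talks to the real authentication store
        return false;
    }
}
```

Swapping the provider out later means touching the controller and every test that goes through it.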
The Precision Mocker (e.g. Moq)
This mocking framework takes advantage of well written, loosely coupled code.
The mocking framework creates substitute items to inject into the code under test.
Figure: Good Example – The code is loosely coupled. The controller is dependent on an interface, which is injected into the controller via its constructor. The unit test can easily create a mock object and substitute it for the dependency. Examples of this type of framework are Moq and NSubstitute.
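A minimal sketch of the good example using Moq and MSTest (the interface, controller and test names are invented to illustrate the pattern):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;

// The controller depends on an abstraction, not a concrete provider
public interface IAuthService
{
    bool ValidateUser(string userName, string password);
}

public class AccountController
{
    private readonly IAuthService _auth;

    // The dependency is injected via the constructor
    public AccountController(IAuthService auth)
    {
        _auth = auth;
    }

    public bool SignIn(string userName, string password)
    {
        return _auth.ValidateUser(userName, password);
    }
}

// The unit test substitutes a Moq mock for the dependency,
// so no real authentication provider is needed
[TestClass]
public class AccountControllerTests
{
    [TestMethod]
    public void SignIn_WithValidCredentials_ReturnsTrue()
    {
        var authMock = new Mock<IAuthService>();
        authMock.Setup(a => a.ValidateUser("jane", "secret")).Returns(true);

        var controller = new AccountController(authMock.Object);

        Assert.IsTrue(controller.SignIn("jane", "secret"));
    }
}
```

If we ever change authentication providers, only the concrete implementation of IAuthService changes; the controller and its tests are untouched.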
The article discusses setting up Azure VMs to run Active Directory, as an alternative to using ‘Windows Azure Active Directory’.
Key points in this article
- “To extend AD services such as directory and authentication to VMs in Azure, an architect can now start to include Domain Controllers (DCs) and Read-only DCs (RODCs) in Azure as part of a design or solution.”
- “Microsoft lets you BYON (bring your own network) into Windows Azure, so it’s technically feasible to securely connect on-premise, WAN, and private cloud networks with Azure virtual networks.”
Reasons to have a DC or RODC in Azure
1. Latency: Latency in the AD authentication between on-premise and cloud networks can cause timeout issues (e.g. authentication timeouts) and issues for demanding applications.
2. Resiliency: Having a DC/RODC in Azure ensures the cloud environment continues to function if the connection to the on-premise DCs fail.
3. Cost: “Azure download bandwidth charges are saved by keeping AD-related network traffic such as DNS and LDAP in the cloud. There is no charge for uploads into Azure, so an RODC in Azure, which has no outbound replication channel, will save money compared to having Azure VMs use the Azure virtual network for all AD traffic.”
4. You need a stand-alone AD: “A self-contained AD that lives only in your Azure cloud might provide directory and authentication services to elastic clusters or farms of computers that have no need for authentication with an on-premise AD.”
Create an AD site in Windows Azure
- Azure is really just a huge network of VMs. Configuring AD in Azure is almost the same as hosting AD on VMs on-premise:
“general precautions about ensuring AD recoverability when AD is deployed on VMs apply to VMs in Azure.”
- Issues discussed: dynamically assigned network addresses and defining subnets
Provision a DC with the Azure data disk type
- Important details about provisioning a DC:
o “You must add an additional disk to the Azure VM that will be a DC, before running DCPROMO. This second disk must be of the “data” type, not the “OS” type. The C: drive of every Azure VM is of the “OS disk” type, which has a write cache feature that cannot be disabled. Running a DC with the SYSVOL on an Azure OS disk is not recommended and could cause problems with AD.”
o “This means you must not perform a default installation of DCPROMO on an Azure VM, but rather you attach a data disk, then run DCPROMO and locate AD files such as SYSVOL on the data disk, not the C: drive. This link at Microsoft has checklists to add an Azure VM data disk or attach an empty data disk: http://www.windowsazure.com/en-us/manage/windows/how-to-guides/attach-a-disk/“
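The attach-a-data-disk step sketched in PowerShell (the service name, VM name, size and label below are illustrative):

```powershell
# Attach an empty data disk to the future DC before running DCPROMO,
# so SYSVOL and the AD database can live off the write-cached OS disk
# (service, VM name, size and label are hypothetical)
Get-AzureVM -ServiceName "MyAdService" -Name "AzureDC01" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 20 -DiskLabel "ADData" -LUN 0 |
    Update-AzureVM
```

During DCPROMO, point the database, log and SYSVOL paths at the new drive rather than C:.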
Alternative Option: “Windows Azure Active Directory” product
- “Windows Azure Active Directory” – is a separate product
- It is an alternative to setting up Azure VMs to run AD (what we are talking about in this article)
- It is an outsourced AD that lives completely and only in the cloud.
- It appeals to Microsoft Office 365, Dynamics CRM Online, and Windows InTune customers
Follow Up Reading
- Guidelines for Deploying Windows Server Active Directory on Windows Azure Virtual Machines http://msdn.microsoft.com/en-us/library/windowsazure/jj156090.aspx
- BYON into the public cloud with Azure Virtual Networks
I had a conversation today with a lead developer who was working with a team that couldn’t get the hang of not checking in bad code. To resolve the issue, he implemented Gated Checkins. He asked me to check out some of the code and I was happy to, right up until I had to do a few checkins. The following are my subsequent thoughts on the matter.
Gated checkins are used to stop developers from checking in bad code and breaking the build.
This does not contribute to high functioning teams, and instead masks, or even creates dysfunction.
To illustrate, let’s look at a couple of examples.
In the retro the team decides to turn gated checkins on because Jonny and Sue keep breaking the build.
The build doesn’t get broken any more, because Jonny and Sue now have to fix their code before they check it in.
This doesn’t mean that Jonny and Sue are writing better code, it just means that they are not checking in code that breaks the build.
Gated checkins will not improve their skill level, change their attitude or improve the quality of their code.
The development ninjas on the team are proud of their code, and check in several times per day.
Because the gated checkin takes 10 minutes their workflow is impacted.
They resent Jonny and Sue for having to work this way.
Gated Checkins mask the dysfunction on the team, and introduce impediments to the high performers.
Example – The team tackles the dysfunction instead
In the retro the team discusses the fact that the build is often broken.
After a round table discussion about becoming better programmers and building better quality software, the team decides on the following guidelines:
1. The team will all run build notifications so that everyone knows when, and by whom the build is broken.
2. If someone needs help with solving a problem, they are going to feel good about asking for help early, and learning something new in the answer.
3. If someone is asked for help, they will gladly share their knowledge to ensure that the quality of the project is maintained, and the team help each other to become better developers.
4. Before checking in, the devs will compile and run all tests. **
5. If someone checks in and does break the build, they will call out to all members of the team that the build is broken so that no-one gets latest. They will fix the build IMMEDIATELY, and then call out again when it is fixed.
(Some teams have a rule that if you break the build three times you have to shout coffee / lunch).
6. The team agrees that you don’t go home if the build isn’t green.
If it comes to the end of the day and you are not sure whether your code will break the build – do not check in. Create a shelveset and resolve the issue properly the next day.
If you have checked in, the build is broken, and you cannot fix it before going home, you must email all devs on the team, and the product owner with an explanation.
7. The status of the build is reviewed in every daily scrum. Edit: Gerard correctly pointed out that this is bad Scrum.
The whole team should be constantly aware of, and invested in, the status of the build and the quality of the software, and should encourage each other to become better developers.
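For the end-of-day case in point 6, parking your work as a shelveset from the command line looks something like this (the shelveset name and comment are just examples):

```
tf shelve "EndOfDay-NotGreen" /comment:"Tests not passing yet - do not unshelve"
```

Your pending changes go to the server without touching the build, and you can unshelve and fix them properly in the morning.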
** I actually don’t follow this rule when working on small teams of awesome devs, who write code against tests and checkin frequently.
Instead I encourage the process to be
- checkin 4-5 times a day
- write lots of tests
- if the tests that you are working against pass – check in and let the build server do a full compile and run all the tests
- if you have broken the build, call it out, fix it immediately and then call it out again.
This is the most productive way for small teams of awesome developers to produce great code… and it’s fun !