15 August 2013

Making a Case in FogBugz

Though it isn't strictly code-related, writing cases concerns every person involved in the development process. From designers to developers to testers, writing good cases is vital to the efficiency of the project workflow.

This video on making cases in FogBugz, the bug tracking system that we use, breaks cases into three basic types of information: title, metadata, and notes.

A good case doesn't just convey the right information; it presents that information in a way that is easily readable and free of ambiguity. A well-written, well-organized case saves time for every subsequent reader.

Posted by Tonya Ross at 03:00 PM

27 June 2013

New Win32 APIs in Windows 8.1 Preview

The new Windows Runtime APIs in Windows 8.1 are well-documented on MSDN, but there is no such list for Win32 APIs. Here's what's coming, based on the Windows 8.1 SDK.

Devices

New DeviceIoControl commands:

  • FSCTL_QUERY_REGION_INFO (uses new FSCTL_QUERY_REGION_INFO_INPUT, FSCTL_QUERY_REGION_INFO_OUTPUT, FILE_STORAGE_TIER_REGION structs)
  • FSCTL_QUERY_SHARED_VIRTUAL_DISK_SUPPORT
  • FSCTL_QUERY_STORAGE_CLASSES (uses new FILE_STORAGE_TIER, FILE_STORAGE_TIER_MEDIA_TYPE structs)
  • FSCTL_SVHDX_SYNC_TUNNEL_REQUEST
  • FSCTL_USN_TRACK_MODIFIED_RANGES
  • SCRUB_PARITY_EXTENT and SCRUB_PARITY_EXTENT_DATA structs added for FSCTL_SCRUB_DATA

New VolumeFlags for FILE_FS_PERSISTENT_VOLUME_INFORMATION:

  • PERSISTENT_VOLUME_STATE_GLOBAL_METADATA_NO_SEEK_PENALTY
  • PERSISTENT_VOLUME_STATE_LOCAL_METADATA_NO_SEEK_PENALTY
  • PERSISTENT_VOLUME_STATE_NO_HEAT_GATHERING

DirectX

Windows 8.1 includes DirectX 11.2, plus many new DirectComposition and DirectManipulation APIs.

DPI Awareness

Event Tracing for Windows

  • TdhAggregatePayloadFilters
  • TdhCleanupPayloadEventFilterDescriptor
  • TdhCreatePayloadFilter
  • TdhDeletePayloadFilter
  • TdhEnumerateManifestProviderEvents
  • TdhGetManifestEventInformation

Media Foundation

Printing

New PW_RENDERFULLCONTENT flag for PrintWindow.

Processes and Threads

New enum values for UpdateProcThreadAttribute:

  • PROCESS_CREATION_MITIGATION_POLICY_PROHIBIT_DYNAMIC_CODE_ALWAYS_ON
  • PROCESS_CREATION_MITIGATION_POLICY_BLOCK_NON_MICROSOFT_BINARIES_ALWAYS_ON

Router Management

New enum values for dwLcpOptions in PPP_PROJECTION_INFO:

  • PPP_LCP_AES_192
  • PPP_LCP_GCM_AES_128
  • PPP_LCP_GCM_AES_192
  • PPP_LCP_GCM_AES_256

Touch Pads

Expand This List

The source for this list is available as a gist; please fork it and contribute any corrections or additions.

Posted by Bradley Grainger at 12:00 PM

03 June 2013

Async and Await in WPF

In our last video, we looked at using the async and await keywords in a console application. This week's video uses async and await in a WPF application.

First we create a simple event handler using async and await. Then we simulate what happens behind the scenes with await by implementing the same behavior using continuation tasks. Just like await, we capture the SynchronizationContext and use the Post method to run the continuation on the UI thread.
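
That simulation might look something like this sketch (the handler, control, and method names here are hypothetical, not taken from the video):

```csharp
private void StartButton_Click(object sender, RoutedEventArgs e)
{
    // Capture the UI thread's SynchronizationContext, as await does implicitly.
    SynchronizationContext context = SynchronizationContext.Current;

    Task.Run(() => FetchData())  // FetchData is a stand-in for long-running work
        .ContinueWith(t =>
        {
            // Post the continuation back to the UI thread, just as await would.
            context.Post(_ => resultTextBlock.Text = t.Result, null);
        });
}
```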

Next we use DBpedia's SPARQL endpoint to asynchronously execute a query against its structured data from Wikipedia. We then see what happens when an exception is thrown in an awaited task.

Stephen Toub has an excellent three-part article series on await, SynchronizationContext, and console apps.

Posted by Scott Fleischman at 03:30 PM

23 May 2013

TAP Using Tasks and Async/Await

With our two previous videos on Starting Asynchronous Work Using Tasks and Continuation Tasks, we are in an excellent position to use the new async and await keywords in C# 5.

This week's video converts a simple synchronous method to async following the Task-based Asynchronous Pattern (TAP) using two different implementations.

  1. The first implementation uses Tasks, continuations and Task.Delay.
  2. The second uses the new async and await keywords, resulting in code that is very similar to the synchronous version. It also uses the Task.WhenAll method to asynchronously wait on multiple tasks.
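
A rough sketch of the second approach (the method names and delays here are illustrative, not taken from the video):

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical TAP-style method; Task.Delay stands in for real asynchronous work.
static async Task<int> GetValueAsync(int value, int delayMilliseconds)
{
    await Task.Delay(delayMilliseconds);
    return value;
}

// Start both operations, then asynchronously wait on them with Task.WhenAll;
// the method reads much like its synchronous counterpart would.
static async Task<int> GetTotalAsync()
{
    Task<int> first = GetValueAsync(1, 100);
    Task<int> second = GetValueAsync(2, 200);
    int[] results = await Task.WhenAll(first, second);
    return results[0] + results[1];
}

Console.WriteLine(await GetTotalAsync());
```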


Posted by Scott Fleischman at 01:40 PM

17 May 2013

Continuation Tasks

Last week I posted a video on Starting Asynchronous Work Using Tasks. This week's video is on Continuation Tasks. Continuation tasks allow you to control the flow of asynchronous operations, and they are especially useful for passing data between those operations. Continuation tasks are normally created using the Task.ContinueWith method; they can also be created using methods like TaskFactory.ContinueWhenAll.
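
For example, a small sketch of both methods (the string-counting tasks here are illustrative):

```csharp
using System;
using System.Threading.Tasks;

// The antecedent task produces some data.
Task<string> sentence = Task.Run(() => "the quick brown fox");

// ContinueWith receives the completed antecedent, so its Result flows onward.
Task<int> wordCount = sentence.ContinueWith(t => t.Result.Split(' ').Length);

// TaskFactory.ContinueWhenAll runs a continuation once several tasks finish.
Task report = Task.Factory.ContinueWhenAll(
    new Task[] { sentence, wordCount },
    tasks => Console.WriteLine("words: " + wordCount.Result));

report.Wait();
```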

Posted by Scott Fleischman at 02:40 PM

14 May 2013

Using native DLLs from ASP.NET apps

By default, ASP.NET uses shadow copying, which "enables assemblies that are used in an application domain to be updated without unloading the application domain." Basically, it copies your assemblies to a temporary folder and runs your web app from there to avoid locking the assemblies, which would prevent you from updating those assemblies.

This works great for managed DLLs, but not so well for native DLLs. ASP.NET doesn't copy the native DLLs, so the web app fails with an error that doesn't make it obvious that a native DLL is missing, let alone which one it is.

The simplest solution I've found is to turn off shadow copying, which causes the DLLs to be loaded directly from the bin directory. This is the strategy now being used in production by Biblia.com. Just add a <hostingEnvironment> element to your web.config:

<configuration>
  ...
  <system.web>
    ...
    <hostingEnvironment shadowCopyBinAssemblies="false" />
  </system.web>
</configuration>

This also works for local development, but you may find that you need to restart the app pool in order to rebuild the app. Biblia has a pre-build step that runs appcmd stop apppool and a post-build step that runs appcmd start apppool with the corresponding /apppool.name argument.
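
Those build steps might look like this (the app pool name here is illustrative, and appcmd must be on the PATH or invoked from %windir%\system32\inetsrv):

```shell
# pre-build: stop the app pool so the native DLLs in bin are unlocked
appcmd stop apppool /apppool.name:"Biblia"

# ... build the app ...

# post-build: start the app pool again
appcmd start apppool /apppool.name:"Biblia"
```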

Alternatively, you could consider removing <hostingEnvironment> from your local web.config and putting your bin folder in the PATH system environment variable, but that will be problematic if you have multiple web apps that depend on different builds of the native DLLs.

Posted by Ed Ball at 10:06 AM

10 May 2013

Starting Asynchronous Work Using Tasks

As multi-core processors are quickly becoming ubiquitous, it becomes increasingly important to use parallel and asynchronous programming techniques to create responsive, high-performance applications. The latest .NET releases have responded to this need by introducing the Task Parallel Library (TPL) in .NET 4, and the async/await keywords in C# 5.

We have created a set of fast-paced, code-driven videos on asynchronous programming in C# using TPL and async/await, with a focus on the Task-based Asynchronous Pattern (TAP). If you want a concise introduction to Tasks and async/await, these videos are for you! The videos are under 5 minutes each, and are intended to give a quick overview of each subject. The accompanying blog posts have links for further study.

This first video shows how to start asynchronous work using the Task.Run method, which returns a Task or Task<TResult>. The video also shows how to create tasks that aren't bound to any thread, using TaskCompletionSource<TResult>.
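
A minimal sketch of both techniques (the values here are illustrative):

```csharp
using System;
using System.Threading.Tasks;

// Task.Run queues work to the thread pool and returns a Task<TResult>.
Task<int> computed = Task.Run(() => 6 * 7);
Console.WriteLine(computed.Result);

// TaskCompletionSource creates a task that isn't bound to any thread; it
// completes only when some code calls SetResult (or SetException/SetCanceled),
// for example from an event handler.
var tcs = new TaskCompletionSource<int>();
Task<int> signaled = tcs.Task;
tcs.SetResult(17);
Console.WriteLine(signaled.Result);
```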

Enjoy!


Next week's video: Continuation Tasks.

Posted by Scott Fleischman at 03:45 PM

19 November 2012

Building Code at Logos: Build Repositories

As mentioned in my last post (Sharing Code Across Projects), developers work on the head of the master branch “by convention”. This is fine for day-to-day work, but we’d like something a little more rigorous for our continuous integration builds.

For this, we use “build repositories”. A build repository contains a submodule for each repository required to build the project. (In the App1 example, the App1 build repo would have App1, Framework and Utility submodules.) The CI server simply gets the most recent commit on the master branch of the build repo, recursively updates all the submodules, then builds the code.
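
The CI server's checkout amounts to something like this (the repo URL and name are hypothetical):

```shell
# get the most recent commit on the master branch of the build repo
git clone git@git:Build/App1.git
cd App1

# recursively pull in every submodule at its recorded commit
git submodule update --init --recursive
```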

The problem now: how is the build repository updated? We solve this using a tool we developed named Leeroy. (So named because we use Jenkins as a CI server, and Leeroy starts the Jenkins builds. We weren’t the first ones to think of this.)

Leeroy uses the GitHub API on our GitHub Enterprise instance to watch for changes to the submodules in a build repo. When it detects one, it creates a new commit (again, through the GitHub API) that updates that submodule in the build repo. After committing, it requests the “force build” URL on the Jenkins server to start a build. Jenkins’ standard git plugin updates the code to the current commit in each submodule and builds it.

The benefit is that we now have a permanent record of the code included in each build (by finding the commit in the build repo for that build, then following the submodules). For significant public releases, we also tag the build repo and each of the submodules (for convenience).

We’ve made Leeroy available at GitHub.

Posts in the “Building Code at Logos” series:

Posted by Bradley Grainger at 01:40 PM

17 November 2012

Building Code at Logos: Sharing Code Across Projects

We often have common code that we'd like to share across different projects. (For example, our Utility library is useful in both the desktop software and in an ASP.NET website.)

One way of sharing code is to place it in its own repository, and add it as a submodule to all repos that need it. But submodules are a bit of a pain to work with on a daily basis (for example, git checkout doesn't automatically update submodules when switching branches; you have to remember to do this every time, or create an alias).
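
One such alias (a sketch; the name "co" is arbitrary) combines the two steps:

```shell
# "git co <branch>" checks out the branch and updates submodules in one step
git config --global alias.co '!f() { git checkout "$1" && git submodule update --init --recursive; }; f'
```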

Submodules also make it difficult to “compose” libraries. For example, App1 and App2 might both use Utility, but they might also both use Framework, a desktop application framework that's not general-purpose enough to live in Utility, but is in its own repo. If Framework itself uses Utility as a submodule, then the App1 and App2 repos might contain both /ext/Utility and /ext/Framework/ext/Utility. This is a maintenance nightmare.

Our choice at Logos is to clone all necessary repositories as siblings of each other. In the App1 example above, we might have C:\Code\App1, C:\Code\Framework and C:\Code\Utility as independent repos. Dependencies are expressed as relative paths that reference files outside the current repo, e.g., ..\..\..\Utility\src\Utility.csproj. We've written a shell script that clones all necessary repos (for a new developer) or updates all subfolders of C:\Code (to get the latest code).
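
In a project file, such a dependency might look like this fragment (hypothetical; the exact shape varies by Visual Studio version):

```xml
<!-- in App1.csproj: reference Utility via a relative path outside the repo -->
<ProjectReference Include="..\..\..\Utility\src\Utility.csproj">
  <Name>Utility</Name>
</ProjectReference>
```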

By convention, developers work on the master branch of each repo (or possibly a feature branch in one or more repos for a complex feature). It’s theoretically possible for someone to push a breaking change to Utility and forget to push the corresponding change to App1 (a problem that submodules do prevent), but this happens very infrequently.


Posted by Bradley Grainger at 09:00 AM

16 November 2012

Building Code at Logos: Third-Party Repositories

Some of our repositories reference third-party code. In many cases, this can be managed using NuGet, but sometimes we need to make private modifications and build from source.

We accomplish this by creating a repository for the third-party code. In some cases, this repository is added as a submodule under ext in the repositories that need it; in other cases, the binaries created from the code are committed to another repository's lib folder. The decision depends on how complicated it is to build the code versus how useful it is for developers to have the source (and not just precompiled binaries).

In the third-party repository, the upstream branch contains the unmodified upstream code, while the master branch contains the Logos-specific modifications.

When cloning the repository locally, the origin remote refers to our repository containing the third-party code. If the original third-party code is available via git, then we add an upstream remote that references the original maintainer's code.

Example: Creating a ThirdParty repo from source on GitHub

# clone a local copy of the remote third-party repository
git clone https://github.com/user/ExampleProject.git
cd ExampleProject

# rename the "origin" remote (created by clone) to "upstream"
git remote rename origin upstream

# add Logos' repo as the "origin" remote
git remote add origin git@git:ThirdParty/ExampleProject.git

# use this code as the "upstream" branch
git checkout -b upstream
git push origin upstream

# work on Logos-specific modifications
git checkout master

# (make modifications)

git commit -am "Some important changes."

# push the changes to our repo
git push origin master

Example: Creating a ThirdParty repo from source in Subversion

# create the git repo
mkdir ExampleProject
cd ExampleProject
git init

# add Logos' repo as the "origin" remote
git remote add origin git@git:ThirdParty/ExampleProject.git

# seed it with the upstream code
svn export --force http://source.example.org/repos/example/tags/1.0 .

# add all the code
git add -A
git commit -m "Add Example 1.0"

# use this code as the "upstream" branch
git checkout -b upstream
git push origin upstream

# work on Logos-specific modifications
git checkout master

# (make modifications)

git commit -am "Some important changes."

# push the changes to our repo
git push origin master

Once the repository is created, we will want to update it with new versions of the third-party code when they are released (then merge in our changes).

The new code gets committed to the upstream branch, then that gets merged into master. If necessary, conflicts are resolved, or our changes are edited/removed to reflect changes in the upstream code.

Example: Updating a ThirdParty repository from source on GitHub

# switch to the "upstream" branch, which contains the latest external code
git checkout upstream

# get the latest code from the "master" branch in the "upstream" repo
git pull upstream master

# switch to our local "master" branch, which contains Logos changes
git checkout master

# merge in the latest upstream code
git merge upstream

# (fix any conflicts, and commit if necessary)

# push the latest merged code to our repo
git push origin master

Example: Updating a ThirdParty repository from source in Subversion

# switch to the "upstream" branch, which contains the latest external code
git checkout upstream

# (delete all files in the working copy, except the '.git' directory)

# get the latest version of the third-party code
svn export --force http://source.example.org/repos/example/tags/1.1 .

# add all files in the working copy, then commit them
git add -A
git commit -m "Update to Example 1.1."

# switch to our local "master" branch, which contains Logos changes
git checkout master

# merge in the latest upstream code
git merge upstream

# (fix any conflicts, and commit if necessary)

# push the latest merged code to our repo
git push origin master


Posted by Bradley Grainger at 09:50 AM