Category Archives: .NET

Hello World in Visual Studio Code on Linux

This post adds a sample to accompany my previous post about VS Code.

Setup

  1. Install the latest Mono as described at http://www.mono-project.com/docs/getting-started/install/linux/#debian-ubuntu-and-derivatives.

  2. Install Visual Studio Code from https://code.visualstudio.com. It is just an archive: unpack and start.

Simple Scenario (no debug and no IntelliSense)

Code

./program.cs

using System;

public static class Program
{
    public static void Main()
    {
        Console.WriteLine("Hello Mono and VS Code!!!");
    }
}

Project configuration

For simple applications without debug support you can skip creating project.json or any other project file.

Build configuration

./.vscode/tasks.json

In the following task configuration we use mcs (the Mono C# compiler) as the build tool, with our single code file as an argument and the $msCompile problem matcher.

{
    "version": "0.1.0",
    "command": "mcs",
    "isShellCommand": true,
    "showOutput": "silent",
    "args": ["program.cs"],
    "problemMatcher": "$msCompile"
}

Launch configuration

Just press F5 and VS Code will auto-generate the launch.json that we need.

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch",
            "type": "mono",
            "request": "launch",
            "program": "${workspaceRoot}/program.exe",
            "args": [],
            "cwd": "${workspaceRoot}",
            "runtimeExecutable": null,
            "env": {},
            "externalConsole": false
        },
        {
            "name": "Attach",
            "type": "mono",
            "request": "attach",
            "address": "localhost",
            "port": 5858
        }
    ]
}

And that’s it

  • Ctrl+Shift+B to build
  • F5 to run and see output in debug console

VS Code Debug Console

Complete scenario

If you need to add debugger and IntelliSense support to the simple project described above, just add a project.json.

Additional setup

To use project.json we need to install DNX, since project.json is part of the DNX build system. Run the following commands to install DNX for Mono:

curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh
dnvm upgrade -r mono

./project.json

Below is a simple project.json file that includes all *.cs files in all subdirectories and uses dnx451 as the framework. Since we have configured DNX to use Mono, dnx451 means Mono in our case.

{
    "configurations": {
        "Debug": {
            "compilationOptions": {
                "define": ["DEBUG", "TRACE"]
            }
        },
        "Release": {
            "compilationOptions": {
                "define": ["RELEASE", "TRACE"],
                "optimize": true
            }
        }
    },
    "frameworks": {
        "dnx451": {
            "frameworkAssemblies": {
                "System": ""
            }
        }
    },
    "dependencies": {
    },
    "compile": "*/**/*.cs" 
}

After that you can navigate the code and use IntelliSense, but you are still not able to debug your program because mcs does not produce *.mdb files by default. To fix this, just add --debug to the mcs arguments in tasks.json.

./.vscode/tasks.json

{
    "version": "0.1.0",
    "command": "mcs",
    "isShellCommand": true,
    "showOutput": "silent",
    "args": ["program.cs","--debug"],
    "problemMatcher": "$msCompile"
}
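Outside of VS Code, the same build can be reproduced in a terminal to check that the symbol file is actually produced. A sketch, assuming mcs and mono are on the PATH:

```shell
# Build with debug symbols; alongside program.exe, mcs should also emit program.exe.mdb
mcs program.cs --debug

# Run the result under the Mono runtime with debugging support enabled
mono --debug program.exe
```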

Now you can use all the functionality of VS Code. Just press F5 and start debugging!

VS Code Debugger

Complex projects

For complex projects, just use your favorite build tool in tasks.json (see my previous post for more details about tasks.json).

Some useful links

Mono Project

Visual Studio Code

DNX

Project File Description

global.json

Schema for tasks.json

task.json description

Debugging in Visual Studio Code

Version Control in Visual Studio Code

Visual Studio Code on Linux

Microsoft has announced that the new version of .Net and the new alternative dev tool, Visual Studio Code, will be available for multiple platforms, including Linux.

In this post I will try to describe my experience using Visual Studio Code. I will not describe .Net Core, DNX, or Mono in detail and will focus on Visual Studio Code. I will use Mono because .Net Core/DNX is currently incomplete and debugging with it is highly complicated under Linux. So I decided to use Mono for now and switch to newer technologies later. I'm currently using Ubuntu 15.10, but everything described should work the same way in 14.04 and any other Debian-based Linux.

First of all, you will need to set up the latest Mono version. In the Canonical repository you will always find an old version; not sure why, maybe some stability consideration, but I'm not sure the Canonical guys test Mono 🙂

Anyway, to install the latest Mono go to http://www.mono-project.com/docs/getting-started/install/linux/#debian-ubuntu-and-derivatives and follow the instructions on how to add the Mono repository and install the latest version.

Next, let's install Visual Studio Code. You can download the latest version at https://code.visualstudio.com/. The downloaded file is just an archive with the application; no installation process is required, just unpack and start.

Also, for some VS Code functionality you will need DNX. Install it according to http://docs.asp.net/en/latest/getting-started/installing-on-linux.html or run the following commands to install DNX for Mono:

curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh
dnvm upgrade -r mono

Projects

There are no projects and solutions as in the usual Visual Studio. The general idea of Visual Studio Code is that the project folder contains all project-related files, and only project-related files. Also, all project-related files are written in some human-readable language (C#, JS, JSON, etc.); no more magic files with magic GUIDs. Thus you can only open a project folder, not a project file, with VS Code. You can configure your project with any text editor, merge project configuration with a merge tool, parse project configuration with automation tools, or do any other task based on documented and clear configuration files.

We still need project description files

If you open a folder with code but no project files in Visual Studio Code, you will be able to use VS Code as a smart text editor and nothing more. However, any modern IDE should have code suggestions, code navigation, in-place error highlighting, debugging and so on. Be sure that Visual Studio Code supports these features, and supports some of them at a higher level than VS 2015 Community Edition. But to enable all these features you have to explain some details about your code to VS Code – create a project file.

What files can be used to configure project?

Old project and solution files

VS Code supports *.sln and project files. You cannot open a solution file, but the code parsing services will be able to locate and read solution/project files when you open the solution folder.

./**/project.json

The file named project.json is the main project configuration file. You can have several subprojects in your project and configure each one separately with its own project.json; in the case of .Net, every project file will produce an assembly. See the example below.

{
    "configurations": {
        "Debug": {
            "compilationOptions": {
                "define": ["DEBUG", "TRACE"]
            }
        },
        "Release": {
            "compilationOptions": {
                "define": ["RELEASE", "TRACE"],
                "optimize": true
            }
        }
    },
    "frameworks": {
        "dnx451": {
            "frameworkAssemblies": {
                "System": "",
                "System.Runtime": ""
            }
        }
    },
    "dependencies": {
        "Newtonsoft.Json": "8.0"
    },
    "compile": "*/**/*.cs" 
}

This configuration file describes two configurations, Debug and Release, with different optimization settings and specific define directives; defines one framework – dnx451 – with the framework assemblies used; and specifies the required NuGet packages in the “dependencies” section. The compile section says that the project should include all *.cs files in all subdirectories (/**/ means any subdirectory).

Please note that the project.json file is a part of the DNX build system and you have to install DNX to make it work.

Full specification of project file can be found here https://github.com/aspnet/Home/wiki/Project.json-file.

./global.json

If you have several projects, you can group them together and tell VS Code that all the project.json files should be treated as parts of one solution, using a global.json file.

One of my global.json files looks like the following:

 {
   "projects": [
    "Guardian.Common",
    "Guardian.Service",

    "Guardian.Module.BoilerMultiRoom",
    "Guardian.Module.RealtimeProvider",
    "Guardian.Module.Watering",
    "Guardian.Module.Update",
    "Guardian.Module.Video",

    "Guardian.Web.Common",
    "Guardian.Web"
    ]
 }

It just contains a list of all projects. You can find description of global.json file here http://docs.asp.net/en/latest/conceptual-overview/understanding-aspnet5-apps.html#the-global-json-file

./.vscode/tasks.json

Here is an example of the tasks.json file from one of my real projects, which has a Mono back-end and a TypeScript/HTML/Less front-end.

{
    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": ["--no-color"],
    "tasks": [
        {
            "taskName": "default",
            "isBuildCommand": true,
            "showOutput": "silent",
            "problemMatcher": ["$tsc", "$lessCompile",
            {
                "owner": "cs",
                "fileLocation": "relative",
                "pattern": {
                    "regexp": "^\\S(.*)\\((\\d+),(\\d+)\\):.*(error|warning)(.*)$",
                    "file": 1,
                    "line": 2,
                    "column": 3,
                    "severity": 4,
                    "message": 5
                }
            },
            {
                "owner": "general",
                "fileLocation": "relative",
                "pattern": {
                    "regexp": "(error)(ed after)",
                    "file": 1,
                    "severity": 1,
                    "message": 1
                }
            }]
        },
        {
            "taskName": "publish",
            "showOutput": "always",
        }
    ]
}

As you can see, there are some global settings like command and args. The command in this file is the command that should be executed to perform build actions. And yes, the command is global for all tasks. The command can be configured only once, because it should specify a build tool like msbuild, make, or gulp in my case, and every task is a target for that build tool. The actual command line is composed of the command, the global args, and the task name.
The default task in my sample has isBuildCommand=true; this means that VS Code should use it to build my project. You can execute the build task with the Ctrl+Shift+B shortcut.
To execute other tasks you can press F1 and then type Run Task followed by Enter. This will list all available tasks. Select one and press Enter to execute the task.
To parse the result of any task you can specify a problemMatcher. A problem matcher is just a pattern to extract build errors, warnings and any other messages. All extracted errors are shown in the error list in VS Code and in-place in your code, as in the usual VS. You can use one of the existing problem matchers or define your own with a regular expression pattern.
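For example, the custom "cs" matcher defined in the tasks.json above is written against compiler output lines of roughly this shape (a made-up sample line, assuming mcs-style output):

```
Service/Program.cs(12,9): error CS0103: The name `Consle' does not exist in the current context
```

The regular expression captures the file path, line, column, severity, and message from such a line, which is what VS Code uses to populate the error list.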

Some of available problem matchers

  • $msCompile – Microsoft compilers (C# or C++)
  • $lessCompile – Less files compiler
  • $tsc – TypeScript compiler
  • $gulp-tsc – TypeScript compiler implemented as gulp task

Some notes about single command for all tasks and build tools

At first, as an experienced user of the usual Visual Studio Enterprise, where we have a build plus a lot of other “crutches” that allow us to automate tasks, I was thinking: “WTF, the same command for all tasks???”. But later I noticed that usually we have to write yet more “build crutches” to execute all these “crutches” on the build machine. In VS Code you configure build stages (targets) as tasks and perform them through your build tool. That is a kind of DRY principle applied to build scripts. Write any task and you will be able to use it on the build machine as well.

VS Code tasks are a powerful tool that allows us to use any build system and integrate it with the code editor.

For more information about tasks see following links.

https://code.visualstudio.com/docs/editor/tasks_appendix

https://code.visualstudio.com/Docs/editor/tasks

./.vscode/launch.json

launch.json describes how to execute and debug the application when you press the F5 key. Here is an example from one of my projects:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch",
            "type": "mono",
            "request": "launch",
            "program": "./publish/Guardian.Service.exe",
            "args": [],
            "cwd": "./publish/",
            "env": {}
        },
        {
            "name": "Attach",
            "type": "mono",
            "request": "attach",
            "address": "localhost",
            "port": 5858
        }
    ]
}

Currently VS Code supports only two launch configurations: “Launch” and “Attach”, to start debugging or to attach to an already started process. You can specify the type of application (currently only “mono” and “node” are supported under Linux) and the program to start, or the host/port to attach the debugger to.

Unfortunately there is currently no way to debug .Net Core in VS Code under Linux. I hope to see it in the near future. The most significant problem here is the .pdb/.mdb file mismatch: VS Code for Linux supports Mono code mapping files (.mdb), while .Net Core produces the usual .pdb files. I hope it is just a question of time, since under Windows VS Code can already debug .Net applications.

Ensure that project description is parsed correctly

When you open a folder with just a project.json file, VS Code automatically parses this file and enables functionality like suggestions and code navigation. In this case you will see a “Running” status bar item indicating that VS Code is parsing the project file.

VS Code Is Parsing Project File

When the project is parsed successfully, you will see a status like the following.

VS Code Project File

By clicking on the project name in the status bar you can select a project file manually.

In some cases VS Code will not be able to select a project file. It will then show green “Select project” text in the status bar, and you will have to select the project file manually.

When the project file is parsed by VS Code, you will be able to use functionality like code navigation, suggestions, etc.

VS Code Navigation

Built-in GIT support

VS Code has a built-in git client that allows you to perform simple git tasks like push/pull, rebase, commit, selecting files for commit, reverting specific files and so on. More complex tasks like viewing history and merging conflicts do not look very nice in the current version of VS Code, and you will most likely use external tools for them.

VS Code Git Support

Read more at https://code.visualstudio.com/Docs/editor/versioncontrol

Other languages and technology support

Visual Studio Code supports a lot of languages besides C#, and support for some languages is even better than in VS Enterprise.

Features and languages

  • Syntax coloring, bracket matching – Batch, C++, Clojure, Coffee Script, Dockerfile, F#, Go, Jade, Java, HandleBars, Ini, Lua, Makefile, Objective-C, Perl, PowerShell, Python, R, Razor, Ruby, Rust, SQL, Visual Basic, XML
  • Snippets – Groovy, Markdown, PHP, Swift
  • IntelliSense, linting, outline – CSS, HTML, JavaScript, JSON, Less, Sass
  • Refactoring, find all references – TypeScript, C#

Summary

  • You can use Visual Studio Code to write, refactor and debug .Net/Mono (C#) code under any OS
  • Support for other languages makes VS Code a highly efficient tool for mixed projects with, for example, TypeScript, Less and C# code
  • Support for custom build tools adds more value to VS Code as a tool for complex mixed projects
  • All project configuration is human-readable JSON that can be easily maintained
  • VS Code has built-in git support that solves 90% of tasks
  • Using VS Code requires another point of view on development – easy-to-understand config files instead of wizards

See my next post for sample project.

Some useful links

Mono Project

Visual Studio Code

DNX

Project File Description

global.json

Schema for tasks.json

task.json description

Debugging in Visual Studio Code

Version Control in Visual Studio Code

DNX, .Net Core, ASP.Net vNext, who is who?

I’m writing this post after we have done several projects (some commercial, some internal) with these technologies at Tesseris Pro and discovered a lot of things that are not covered by the documentation.

Let’s try to understand the place of every project in the big picture.

Many of us already know about the new version of .Net. There are a lot of resources saying that it will be open source, will run on Linux and OS X without Mono, and a lot of other things. Some statements can be disappointing because they conflict with each other.

Let’s review available projects

At first let’s understand what DNX and .Net Core are and how they relate to each other.

  • DNX is a Dot Net Execution Environment. As ASP.Net vNext says, it’s “…a software development kit (SDK) and runtime environment that has everything you need to build and run .NET applications for Windows, Mac and Linux … DNX was built for running cross-platform ASP.NET Web applications…”. And that’s right: with dnu (the DNX utility) you can build projects.
  • .Net Core is a “… cross-platform implementation of .NET that is primarily being driven by ASP.NET 5 workloads… The main goal of the project is to create a modular, performant and cross-platform execution environment for modern applications.”(see .Net Core)

Hm… two projects from MS with the same goals and the same features. And yes, it’s true: DNX and .Net Core currently give us almost the same functionality. These two sites, together with the ASP.Net and VS Code web sites, bring a lot of misunderstanding about what the next version of .Net is. What is the reason for it? The answer is here (https://github.com/dotnet/cli/blob/master/Documentation/intro-to-cli.md): “We’ve been using DNX for all .NET Core scenarios for nearly two years… ASP.NET 5 will transition to the new tools for RC2. This is already in progress. There will be a smooth transition from DNX to these new .NET Core components.” Looks like DNX will be replaced by tools from .Net Core.

Ok, what about .Net Framework 4.6 and Mono? .Net Framework (https://www.microsoft.com/net) will continue its evolution as a framework with WPF and other Windows-specific stuff, and it will be compatible with .Net Core. This means that it will not duplicate core functionality, but will instead offer additional services on top of it. And as before, the most interesting things, like WPF, will be MS Windows only. The same story with Mono, I think.

Let’s summarize

.Net Core – a set of cross-platform tools to build applications, cross-platform execution tools, and a set of cross-platform core libraries (like System, System.Runtime, System.IO, etc.)

DNX – obsolete (at least for ASP .Net 5); a set of cross-platform tools and a runtime environment with almost the same feature set as .Net Core

.Net Framework – a set of libraries to develop Windows desktop and web applications; some assemblies may be cross-platform, as far as the assembly format is the same in all the described technologies

Mono – a set of libraries that partially replaces .Net Framework under Linux and OS X, plus execution tools and build tools

The assembly format is the same, so Mono can execute a Core or Framework assembly and vice versa. The most significant problem, apart from P/Invokes to the Win API, is references. Currently all the described frameworks distribute functionality across assemblies differently. So sometimes you will not be able to start an application, because it will search for class C in assembly A while in the actual runtime class C is located in assembly B.
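The standard CLR mechanism for this situation is type forwarding: the old assembly keeps a forwarder attribute so that existing references still resolve to the type's new home. A minimal sketch (the type name Contoso.Widget is hypothetical):

```csharp
using System.Runtime.CompilerServices;

// Placed in the old assembly (A): tells the loader that Widget
// now lives in whatever assembly actually defines it (B).
// Existing binaries compiled against A keep working unchanged.
[assembly: TypeForwardedTo(typeof(Contoso.Widget))]
```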

Some additional notes about build tools

Both .Net Core and DNX have the new project file format, project.json. It aims to use the file system structure as the project structure and allows building the application for different platforms at the same time. As a result you will have a set of assemblies, each referencing the correct assembly for every class.

Both tools work on Linux and OS X (I have not tested OS X yet).

One of the significant problems now is debugging under Linux. To debug an application we need a .pdb (.mdb) file that binds the binary assembly to the source code files. The DNX tools are not able to produce any debug files; the .Net Core tools can produce *.pdb files, but VS Code and MonoDevelop need *.mdb files under Linux to debug. So for now it’s better to use Mono under Linux if you would like to debug 🙂 Even if you are going to use VS Code.

Another important thing is that the .Net Core build tools can produce a small native Linux executable to start the application without “mono app.exe”.

My next post will be about build tools and how to set up a build environment under Linux.

WPF vs. GDI+ Some additional notes.

In one of my previous posts, WPF vs. GDI+, I wrote about the performance of WPF and how to work around it. After some experiments at Tesseris Pro we’ve found further improvements to the solution described in that post. The main idea is that converting a GDI bitmap to a WPF bitmap requires a memory allocation and decreases performance. Fortunately, there is a solution that maps a WPF bitmap onto a GDI bitmap, so that when we draw on one bitmap the other changes too, because they are located in the same memory.

First we will need some API calls. You can read the full descriptions on MSDN, but the names of the functions are more than descriptive – if you know the Win API, of course 😉

[DllImport("kernel32.dll", SetLastError = true)]
static extern IntPtr CreateFileMapping(
                IntPtr hFile, 
                IntPtr lpFileMappingAttributes, 
                uint flProtect, 
                uint dwMaximumSizeHigh,
                uint dwMaximumSizeLow,
                string lpName);

[DllImport("kernel32.dll", SetLastError = true)]
static extern IntPtr MapViewOfFile(
                IntPtr hFileMappingObject,
                uint dwDesiredAccess,
                uint dwFileOffsetHigh,
                uint dwFileOffsetLow,
                uint dwNumberOfBytesToMap);

[DllImport("kernel32.dll", SetLastError = true)]
static extern bool UnmapViewOfFile(IntPtr hFileMappingObject);

[DllImport("kernel32.dll", SetLastError = true)]
static extern bool CloseHandle(IntPtr handle);

Then, before creating the bitmaps, let’s create a memory-mapped file as the source for them:

var format = PixelFormats.Bgr32;

var pixelCount = (uint)(width * height * format.BitsPerPixel / 8);
var rowWidth = width * (format.BitsPerPixel / 8);

this.fileMapping = CreateFileMapping(
                       new IntPtr(-1), 
                       IntPtr.Zero, 
                       0x04, 
                       0, 
                       pixelCount, 
                       null);

this.mapView = MapViewOfFile(
                       fileMapping, 
                       0xF001F, 
                       0, 
                       0, 
                       pixelCount);

When we call CreateFileMapping with new IntPtr(-1) as the first parameter, Windows doesn’t map an actual file to memory but uses the system page file as the source of the mapping. And of course, in this case we should specify the size of the mapping: width * height * format.BitsPerPixel / 8.
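The numeric literals passed above are Win API constants; naming them makes the intent clearer. A sketch using the values from the Windows SDK headers:

```csharp
// From the Windows SDK headers (winnt.h):
const uint PAGE_READWRITE = 0x04;         // flProtect: committed read/write pages
const uint FILE_MAP_ALL_ACCESS = 0xF001F; // dwDesiredAccess for MapViewOfFile

this.fileMapping = CreateFileMapping(
                       new IntPtr(-1),    // use the system page file, not a real file
                       IntPtr.Zero,
                       PAGE_READWRITE,
                       0,
                       pixelCount,
                       null);

this.mapView = MapViewOfFile(fileMapping, FILE_MAP_ALL_ACCESS, 0, 0, pixelCount);
```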

Now let’s create two bitmaps backed by this mapped file:

this.bitmap = new System.Drawing.Bitmap(
              width, 
              height,
              rowWidth,
              System.Drawing.Imaging.PixelFormat.Format32bppPArgb,
              this.mapView);

this.image = (System.Windows.Interop.InteropBitmap)
  System.Windows.Interop.Imaging.CreateBitmapSourceFromMemorySection(
                                         fileMapping, 
                                         width, 
                                         height, 
                                         format, 
                                         rowWidth, 
                                         0);

Now, in OnRender, you can use the following code:

protected override void OnRender(DrawingContext dc)
{
   // Ensure that bitmap is initialized and has correct size
   // Recreate bitmap ONLY when size is changed
   InitializeBitmap((int)this.width, (int)this.height);

   //TODO: Put here your drawing code

   // Invalidate and draw bitmap on WPF DrawingContext
   this.image.Invalidate();
   dc.DrawImage(
           this.image, 
           new Rect(0, 0, this.bitmap.Width, this.bitmap.Height));
}

Please note that this.image should be of type System.Windows.Interop.InteropBitmap in order to call the Invalidate method. And don’t forget to call UnmapViewOfFile and CloseHandle.
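A sketch of that cleanup, assuming the handles live in the fields used above (fileMapping, mapView, bitmap):

```csharp
public void Dispose()
{
    // Release the GDI+ bitmap first; it draws into the mapped memory.
    this.bitmap?.Dispose();

    if (this.mapView != IntPtr.Zero)
    {
        UnmapViewOfFile(this.mapView);   // unmap the view of the page-file section
        this.mapView = IntPtr.Zero;
    }

    if (this.fileMapping != IntPtr.Zero)
    {
        CloseHandle(this.fileMapping);   // then close the mapping handle itself
        this.fileMapping = IntPtr.Zero;
    }
}
```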

WPF vs. GDI+

The problem

In one of our projects at Tesseris Pro we needed to draw a huge table of results from a physical experiment. The table can be 100×100 or 1000×1000. The application is a usual Windows desktop application built with WPF. As usual, we tried to use some 3rd-party grid, and as you can imagine, we got unacceptable performance even with virtualization. The most problematic case is when the user zooms out the table to see every cell as a single pixel; in this case virtualization gives us nothing.

Solution #1

One of the first ideas was to create our own control derived from FrameworkElement or UIElement and implement the drawing of the table inside the overridden OnRender method. We expected this to give us maximum performance. Keeping in mind that WPF is based on DirectX, so we’d get performance as in 3D games, I started implementing a proof of concept with the following OnRender:

protected override void OnRender(DrawingContext dc)
{
    Debug.WriteLine("OnRender...");

    for (int i = 0; i < rows; i++)
    {
        for (int j = 0; j < columns; j++)
        {
            int x = width * i;
            int y = height * j;

            // Draw cell rect
            dc.DrawRectangle(
                      Brushes.Green, 
                      pen, 
                      new Rect(x, y, width, height));

            // Draw some text in cell
            dc.DrawText(
                new FormattedText(string.Format("{0},{1}", i, j),
                    CultureInfo.InvariantCulture,
                    FlowDirection.LeftToRight,
                    typeface,
                    10,
                    Brushes.Black),
                new Point(x, y));
        }
    }

    Debug.WriteLine("OnRender finish");
}

But performance was still unacceptable, even for a table of 100×100 cells: the UI refreshed in about 10 seconds. When I measured the time spent in OnRender I got a strange result: 800 ms. The UI froze for 10 seconds, but OnRender took only 800 ms. This is because WPF never draws anything immediately inside OnRender. With dc.Draw*** you just tell the infrastructure to draw something; WPF then draws all the required things at some other moment. So the real drawing of the 100×100 table takes about 10 seconds.

Solution #2

After the failure of the first solution, I tried to get a DirectDraw surface and draw everything myself. That is not so easy with WPF; I have not found any built-in functionality for it. In blogs I found that the only way to use DirectDraw is to call it through COM interop. A nightmare, if you ask me!
After that I decided to try GDI+ (the System.Drawing namespace), draw the table with System.Drawing.Bitmap and then just draw the bitmap with WPF:

protected override void OnRender(DrawingContext dc)
{
    Debug.WriteLine("OnRender...");
    using (var bmp = new System.Drawing.Bitmap(
                           columns * width, 
                           rows * height))
    {
        using (var g = System.Drawing.Graphics.FromImage(bmp))
        {
            for (int i = 0; i < rows; i++)
            {
                for (int j = 0; j < columns; j++)
                {
                    int x = width * i;
                    int y = height * j;

                    g.FillRectangle(
                                 System.Drawing.Brushes.Green, 
                                 x, 
                                 y, 
                                 width, 
                                 height);

                    g.DrawRectangle(
                                System.Drawing.Pens.DarkGray, 
                                x, 
                                y, 
                                width, 
                                height);

                    g.DrawString(
                                string.Format("{0},{1}", i, j), 
                                font, 
                                System.Drawing.Brushes.Black, 
                                new System.Drawing.PointF(x, y));
                }
            }
        }

        // Create Image from bitmap and draw it
        var options = BitmapSizeOptions.FromEmptyOptions();
        var img = Imaging.CreateBitmapSourceFromHBitmap(
                                    bmp.GetHbitmap(), 
                                    IntPtr.Zero, 
                                    Int32Rect.Empty, 
                                    options);

        dc.DrawImage(img, new Rect(0, 0, bmp.Width, bmp.Height));
    }
    Debug.WriteLine("OnRender finish");
}

When I started the app, I thought something was going wrong: the UI refreshed in less than one second. More than 10 times faster than with WPF drawing!

Conclusion

WPF gives us complex layout, device independence and other sweet things. But old GDI+ sometimes gives much more – performance and simplicity!

What Lies Behind async/await and Why Experienced Developers Should Be Careful

1. Basic features of the Task Parallel Library (TPL)

All TPL features are based on the old Thread and ThreadPool classes; to be more precise, asynchronous tasks are executed through the ThreadPool class. In fact, the simplest way to start an asynchronous task with the new tools is not too different from using ThreadPool and looks like this:

Parallel.Invoke(() => DoSomeWork(), () => DoSomeOtherWork());

Here we have two lambda expressions that can be executed in parallel. However, exactly how this happens and how the tasks are synchronized is determined by newer mechanisms. But first things first.

The main element of the parallel computing model is the Task class. This class encapsulates some computational task. TPL offers no “arithmetic” of tasks; in other words, you cannot combine two tasks into a sequential chain or a parallel set. Developers with TPL experience will say: what about the ContinueWith method? The thing is, this method has no overload that works with tasks; it only lets you extend an existing task with a delegate. It is just like appending one or several more lines to the end of the task’s code.

There are also the static methods Task.WhenAny and Task.WhenAll, which create a task that completes after all, or any, of the tasks passed as parameters complete. But again, they do not let you build chains of tasks without additional effort. Adherents of Reactive Extensions (Rx) criticize TPL a lot for this.
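As a sketch, the closest TPL gets to composition is combining WhenAll with await (the entry-point shape assumes a modern C# compiler that allows async Task Main):

```csharp
using System;
using System.Threading.Tasks;

class WhenAllDemo
{
    static async Task Main()
    {
        var a = Task.Run(() => 2);
        var b = Task.Run(() => 3);

        // WhenAll creates a task that completes when both inputs complete
        // and collects their results into an array.
        int[] results = await Task.WhenAll(a, b);
        Console.WriteLine(results[0] + results[1]); // 5
    }
}
```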

Task can be used in the following simple way:

Task task = new Task(() => Console.WriteLine("Hello from taskA."));
task.Start();
// Some code here
task.Wait();
// Some code that should wait before task finish.

The same code can be replaced with an equivalent where the Task.Run method creates and starts the task:

Task task = Task.Run(() => Console.WriteLine("Hello from taskA."));
// Some code here
task.Wait();
// Some code that should wait before task finish.

In the class documentation you will also find Task.WaitAll and Task.WaitAny for waiting on several tasks. At this point everything seems simple and clear. But let's continue our investigation.
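Unlike WhenAll/WhenAny, these methods block the calling thread. A minimal sketch (task bodies are illustrative):

```csharp
using System;
using System.Threading.Tasks;

class WaitAllDemo
{
    static void Main()
    {
        var tasks = new[]
        {
            Task.Run(() => Console.WriteLine("first")),
            Task.Run(() => Console.WriteLine("second"))
        };

        // WaitAll blocks the calling thread until every task finishes;
        // WaitAny would instead return the index of the first to finish.
        Task.WaitAll(tasks);
        Console.WriteLine("all done");
    }
}
```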

Now consider the new C# syntax with await and async. These two keywords essentially just make the Task class easier to use, sparing the developer from writing extra methods and other boilerplate. To use await, the method we "await" must return a Task, and the method in which we "await" must be marked async.
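These two requirements can be sketched as follows (method and class names are illustrative): the awaited method returns Task&lt;int&gt;, and the awaiting method carries the async modifier.

```csharp
using System;
using System.Threading.Tasks;

class AwaitDemo
{
    // The awaited expression has type Task<int>,
    // and the method that contains 'await' is marked async.
    public static async Task<int> ComputeAsync()
    {
        return await Task.Run(() => 40 + 2);
    }

    static void Main()
    {
        // In a console Main we can simply block on the returned task.
        Console.WriteLine(ComputeAsync().Result); // prints 42
    }
}
```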

Consider the following console application (Debug is used because later we will run this code in other application types):

static void Main(string[] args)
{
   Debug.WriteLine("Main started in thread {0}", Thread.CurrentThread.ManagedThreadId);
   var task = Task.Run(() =>
   {
      Debug.WriteLine("Task1 started in thread {0}", Thread.CurrentThread.ManagedThreadId);
      Thread.SpinWait(5000000);
      Debug.WriteLine("Task1 finished in thread {0}", Thread.CurrentThread.ManagedThreadId);
   });
   task.Wait();
   Debug.WriteLine("Main finished in thread {0}", Thread.CurrentThread.ManagedThreadId);
}

Its output is quite predictable:

Main started in thread 8
Task1 started in thread 9
Task1 finished in thread 9
Main finished in thread 8

The main method started in thread 8, then the asynchronous task ran in thread 9, then thread 8 finished.

Let's rewrite this code using await and async. We have to create an extra method, static async void Run(), because the entry point cannot be marked async. The call to this method looks perfectly ordinary: Run();

static void Main(string[] args)
{
   Run();
}
 
static async void Run()
{
   Debug.WriteLine("Main started in thread {0}", Thread.CurrentThread.ManagedThreadId);
   await Task.Run(() =>
   {
      Debug.WriteLine("Task1 started in thread {0}", Thread.CurrentThread.ManagedThreadId);
      Thread.SpinWait(5000000);
      Debug.WriteLine("Task1 finished in thread {0}", Thread.CurrentThread.ManagedThreadId);
   });
 
   Debug.WriteLine("Main finished in thread {0}", Thread.CurrentThread.ManagedThreadId);
}

At first glance everything looks the same, and those who have already worked a little with await and async might expect the same output. But the actual result is:

Main started in thread 9
Task1 started in thread 10

Where did the completion messages of both threads go, you may ask? To figure this out, let's look at the compiled assembly with Reflector or ILSpy, with decompilation into await/async disabled. We will see that the method now looks strange:

private static void Run()
{
   Program.<Run>d__2 <Run>d__;
   <Run>d__.<>t__builder = AsyncVoidMethodBuilder.Create();
   <Run>d__.<>1__state = -1;
   AsyncVoidMethodBuilder <>t__builder = <Run>d__.<>t__builder;
   <>t__builder.Start<Program.<Run>d__2>(ref <Run>d__);
}

A structure private struct <Run>d__2 : IAsyncStateMachine also appeared, containing a lot of code in which our Debug.WriteLine calls can be found. I will not quote the full code here; anyone interested can repeat the procedure themselves. The point of this code is that the original Run method was taken apart and rebuilt into a state machine that executes the pieces on different threads.

But why did we still not see the two completion lines? The answer comes from a modified version of the example. Let's add a delay after the call to Run:

static void Main(string[] args)
{
   Run();
   Thread.SpinWait(10000000);
}

Now the output shows all four lines:

Main started in thread 9
Task1 started in thread 6
Task1 finished in thread 6
Main finished in thread 6

The last line looks more than strange at first glance: the Main method started in thread 9 but finished in thread 6, the same thread as the task "Task1".
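The SpinWait in Main above keeps the process alive only by guessing a duration. A more robust sketch (not from the original post) is to make Run return Task instead of void, so Main can wait on it deterministically:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        // Blocking on the returned Task is deterministic,
        // unlike hoping a SpinWait outlasts the async work.
        Run().Wait();
    }

    // async Task instead of async void: the caller gets a handle to wait on.
    public static async Task Run()
    {
        Debug.WriteLine("Main started in thread {0}", Thread.CurrentThread.ManagedThreadId);
        await Task.Run(() => Thread.SpinWait(5000000));
        Debug.WriteLine("Main finished in thread {0}", Thread.CurrentThread.ManagedThreadId);
    }
}
```

Note that async void methods should generally be reserved for event handlers, precisely because callers cannot observe their completion or exceptions.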

Now let's do the same in a WPF application that has a single button and a Button_Click handler for it:

private async void Button_Click(object sender, RoutedEventArgs e)
{
   Debug.WriteLine("Main started in thread {0}", Thread.CurrentThread.ManagedThreadId);
   await Task.Run(() =>
   {
      Debug.WriteLine("Task1 started in thread {0}", Thread.CurrentThread.ManagedThreadId);
      Thread.SpinWait(5000000);
      Debug.WriteLine("Task1 finished in thread {0}", Thread.CurrentThread.ManagedThreadId);
   });
   Debug.WriteLine("Main finished in thread {0}", Thread.CurrentThread.ManagedThreadId);
}

The result will be:

Main started in thread 9
Task1 started in thread 6
Task1 finished in thread 6
Main finished in thread 9

Here the main method, for some reason, started and finished in the same thread, thread 9. If you open the generated assembly in ILSpy, the code is roughly the same, apart from the initialization of the variables for the sender and e parameters and the names of the "main" method and its containing class. It turns out the worst fears were unfounded: the generated code is the same and does not depend on the application type. So what does differ?

2. SynchronizationContext and TPL

The reason for the different behavior of the console and WPF applications is different synchronization contexts: different implementations of the base SynchronizationContext class installed as the current synchronization context. Adding the following line to both applications

Debug.WriteLine(SynchronizationContext.Current.GetType().FullName);

we see the following result:

– for WPF: System.Windows.Threading.DispatcherSynchronizationContext
– for the console application: nothing at all, because SynchronizationContext.Current is null

With ILSpy you can see that the AsyncVoidMethodBuilder.Create() method, whose call the compiler added to the Run method, internally accesses the synchronization context:

public static AsyncVoidMethodBuilder Create()
{
    return new AsyncVoidMethodBuilder(SynchronizationContext.CurrentNoFlow);
}

I think we can stop here and not dig deeper into the TPL code. So: the results of task execution come back to us through the synchronization context. In WPF we return to the UI thread via the message-processing queue, while in a console application some arbitrary other thread is used.
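This context capture can be opted out of explicitly. The following sketch (method name is illustrative) uses ConfigureAwait(false), which tells the awaiter not to capture the current SynchronizationContext, so the continuation may run on a thread-pool thread even inside a WPF application:

```csharp
using System;
using System.Threading.Tasks;

class ConfigureAwaitDemo
{
    public static async Task LoadAsync()
    {
        // ConfigureAwait(false): do NOT post the continuation back
        // through the captured SynchronizationContext.
        await Task.Run(() => { /* background work */ }).ConfigureAwait(false);

        // Do not touch UI elements here: in a WPF app this line
        // may now execute on a thread-pool thread.
        Console.WriteLine("continuation ran");
    }

    static void Main()
    {
        LoadAsync().Wait();
    }
}
```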

And that is still not everything: there is also TaskScheduler, which can likewise be replaced and which affects how tasks are scheduled.
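As a hedged sketch of where a scheduler plugs in: ContinueWith has an overload that takes a TaskScheduler explicitly. Here TaskScheduler.Default (the thread pool) is passed; in a UI application one would typically pass TaskScheduler.FromCurrentSynchronizationContext() instead to marshal the continuation onto the UI thread.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class SchedulerDemo
{
    static void Main()
    {
        // TaskScheduler.Default runs continuations on the thread pool.
        // In WPF, TaskScheduler.FromCurrentSynchronizationContext()
        // would route the continuation through the Dispatcher instead.
        Task.Run(() => 42)
            .ContinueWith(
                t => Console.WriteLine("result: " + t.Result),
                CancellationToken.None,
                TaskContinuationOptions.None,
                TaskScheduler.Default)
            .Wait();
    }
}
```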

So when you start using async and await, try to forget everything you previously knew about threads and think in terms of tasks.

In my opinion the construct is very convenient: you no longer need to think about synchronization or about marshaling back to the UI thread. On the other hand, there are so many places where you can go wrong by relying on your accustomed knowledge of threads and the familiar multithreading model.