Author: Dmitry Peleshenko

PhD in IT, Owner at Tesseris Pro LLC (http://www.tesseris.com)

JavaScript. An ugly language from the past becomes a straightforward technology of the future.

It’s not news anymore that JavaScript is the most cross-platform and cross-tier technology today. Let’s take a look at it. JavaScript can be used efficiently on the server side of a web application and for building modern micro-services. JavaScript is for now the only technology that allows building mobile applications for ALL mobile platforms with 100% the same UI and logic code on every platform, using Apache Cordova or React Native from Facebook. JavaScript allows building cross-platform desktop applications, and moreover, those desktop applications can easily be converted to mobile ones. And of course all web browsers support JavaScript. Even some micro-controllers are starting to support JavaScript. Really, JavaScript is everywhere now. And it is highly efficient everywhere.

But why? Why is JavaScript so popular, given that it has no classic OOP support and no multi-threading? It is a kind of magic when limitations become benefits. Because of the limited functionality, developers have to build “plugins” in other languages (for example, native modules for Node.js or native plugins for Cordova), and these plugins are simple units that solve simple tasks. JavaScript’s limitations force developers to build small, simple units that do one thing but do it well. And these units are reusable. Dreams come true!
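This is the pattern you see all over npm: a module that exports one small, well-defined function. A trivial sketch (the module and function names are made up for illustration):

// trim-lines.js: does one thing and does it well
module.exports = function trimLines(text) {
    // Trim every line and glue the text back together
    return text
        .split("\n")
        .map((line) => line.trim())
        .join("\n");
};

// elsewhere: const trimLines = require("./trim-lines");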

From another point of view, the simplicity of JavaScript keeps developers away from over-engineering.

So what is JavaScript today? It is a language for writing application logic in the simplest and most efficient way, without mixing it with technical details.

How to copy required node modules to a target dir with gulp?

One of the common tasks related to Node.js development is building a production-ready package. Sometimes you just need to clean and install production-only scripts, but sometimes you need something more complicated, like preparing files for an Electron package. An important part of this is copying the correct node modules to the target location. By correct node modules I mean production dependencies without dev dependencies. Taking into account the complicated node modules installation algorithm, this may not be a trivial task.

I have tried several approaches, like reading package.json and extracting the list of dependencies, but in that case you can’t just copy the modules listed in dependencies: you have to copy dependencies recursively according to the installation algorithm. In my opinion the optimal solution is to let npm deal with all the modules by itself. The main idea is to install node modules with the --production flag into some cache folder and then copy the modules from the cache to the target folder on every build. To simplify the call to npm install I used gulp-install, but you can do everything with gulp-exec.

Here is my task configuration:

const gulp = require("gulp");
const ts = require("gulp-typescript");
const install = require("gulp-install");
const fs = require("fs");
// I need other info from package.json, so let's load it as an object
const package = require("./package.json");

// Copy node modules from the cache, refreshing the cache first
gulp.task("node_modules", ["node_modules_cache"], () => {
    // Return the stream so gulp knows when the task has finished
    return gulp.src("./build/modules/node_modules/**/*.*", {base: "./build/modules/node_modules/"})
        .pipe(gulp.dest("./build/debug/resources/app/node_modules"));
});

gulp.task("node_modules_cache", () => {
    // Ensure the cache directory exists
    if (!fs.existsSync("./build/modules")) {
        fs.mkdirSync("./build/modules");
    }

    // You can replace the following by just copying package.json, but it is already loaded, so let's save it
    fs.writeFileSync("./build/modules/package.json", JSON.stringify(package));

    // Run npm install in the cache location
    return gulp.src("./build/modules/package.json")
               .pipe(install({production: true}));
});

I’m installing the script content and node_modules into ./build/debug/resources/app/ because I’m assembling an Electron app at ./build/debug/. After that I call gulp-electron to prepare the final application and even change the icon of the electron.exe file, but that is another story…

A Node.js C/C++ module is actually simple

Node.js is one of the most unexpected technologies for me; JavaScript on the server side was an unbelievable thing for me 5 years ago. And to be honest, JavaScript was not a very straightforward technology back then, with its main goal being to handle HTML/CSS based UI. But now we have several successful Node.js projects at Tesseris Pro, and it seems like everything will be done with JavaScript soon. JavaScript itself has become a much more serious and straightforward language. In my last post I described possible ways to run asynchronous operations in Electron.

Another problem related to that Electron project was the creation of a C/C++ module for the ImageMagick library. There were several modules in npm: some of them were just CLI wrappers, some were wrappers around the C++ API. All of them seemed to be wrappers created by somebody to solve their exact problem, and they did not solve mine; in addition, CLI wrappers are slow. Thus I decided to create one more limited wrapper just for my needs, imagemagik2, and I hope one day I will be able to make it more or less full-featured. But let me describe my experience with C/C++ Node.js module creation…

You can find source code here: https://github.com/TesserisPro/imagmagick2

What C/C++ Node.JS modules are?

A Node.js native module is a DLL (or the equivalent for other OSes) that contains code interacting with Node.js through the V8 API. The file is renamed to *.node by a specific build procedure. Inside your C/C++ module you can manipulate the JavaScript representation: create variables, functions and objects, control their parameters, etc.

Bad news:
– The V8 API is extremely over-complicated, and you should be an experienced C/C++ developer to use it.
– Node.js introduces an additional abstraction layer, Native Abstractions for Node.js (NAN), to simplify module programming. It has very poor documentation, but you have to use it, because without this abstraction your module may not be compatible with older or newer versions of Node.js.
– You have to recompile your module for every exact version of Node.js, and for every platform of course. If you try to use a module compiled for another version of Node, even one with only minor changes and on the same platform, you will see an error like “Module version mismatch. Expected 48, got 47” while the module is loading.

Interesting news:
– The module building tool (node-gyp) enforces building a cross-platform module.
– You will need Python to build your extension. It is not actually a problem, but it’s funny 🙂

Good news:
– It is not as complex as you first thought 🙂

Hello world module

You can find a simple module sample in the Node.js documentation.

Module startup

Seems to be obsolete

On the Node.js web page you can find that the entry point of your module is a void Init(Local<Object> exports, Local<Object> module) function, and module registration should be done with NODE_MODULE(addon_name, Init). To add module content you can simply register all parts of your module by adding them to the exports parameter. Another option is to overwrite the whole exports with a function or anything you want. It is exactly the same idea as in usual JavaScript modules: module.exports = {...}.
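For comparison, here is what the same two registration styles look like in a plain JavaScript module (a trivial sketch with made-up names; in a real module you would pick one style):

// Option 1: register parts one by one on exports
exports.add = (a, b) => a + b;

// Option 2: overwrite the whole exports with whatever you want
module.exports = function multiply(a, b) {
    return a * b;
};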

The type Local<T> is actually an object reference managed by the V8 garbage collector. According to the V8 API documentation there are two types of handles: local and persistent. Local handles are light-weight and transient and are typically used in local operations; they are managed by HandleScopes. Persistent handles can be used when storing objects across several independent operations and have to be explicitly deallocated when they are no longer used.

Seems to be current

According to the NAN documentation, the entry point is defined by the NAN macros NAN_MODULE_INIT(init){ } and NODE_MODULE(your_module_name, init), where init is just an identifier and can be changed.

To add something to your exports you can use the NAN macro NAN_EXPORT(target, your_object). The most confusing part here is target: you never define it. It is just a naming convention defined in nan.h and node.h:

// nan.h
#define NAN_MODULE_INIT(name) void name(Nan::ADDON_REGISTER_FUNCTION_ARGS_TYPE target)

// node.h
#define NODE_MODULE(modname, regfunc) NODE_MODULE_X(modname, regfunc, NULL, 0)

NAN is full of macros that define hidden variables and other C language objects, which makes it very hard to understand.

The full code of module startup with registration of a method looks like the following:

#include "std.h"

NAN_METHOD(my_method) {
    // Write code here;    
}

NAN_MODULE_INIT(init) {
    NAN_EXPORT(target, my_method);
}

NODE_MODULE(my_module, init);

Creating a function

To add some functionality to our module you can create a function and register it. And here we have a hidden identifier again. Here is the macro from nan.h:

#define NAN_METHOD(name)  Nan::NAN_METHOD_RETURN_TYPE name(Nan::NAN_METHOD_ARGS_TYPE info)

You can use the info argument to read the function arguments in the following way:

    double myNumberValue = info[0].As<v8::Number>()->Value(); // First argument
    Nan::Utf8String myStringValue(info[1].As<v8::String>()); // Second argument
    char *actualString = *myStringValue; //Way to access string

As you can see, we have some NAN wrappers here again; this time Nan::Utf8String is a useful one, as it saves several lines of code related to the V8 string implementation.
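On the JavaScript side these arguments simply map to info[0], info[1], and so on. Assuming the method above has been built and exported as my_method (a hypothetical name), the call would look like this:

const native = require("./build/Release/my_module.node");

// info[0] receives the number, info[1] receives the UTF-8 string
native.my_method(42, "hello from JavaScript");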

To send the result of your calculations back to the JavaScript world you can set the return value with the following code:

// Create new V8 object
v8::Local<v8::Object> result = Nan::New<v8::Object>();

// Set some object fields
Nan::Set(result, v8::String::NewFromUtf8(Nan::GetCurrentContext()->GetIsolate(), "my_field_name"), Nan::New((int)my_field_value));

// Set object as return value
info.GetReturnValue().Set(result);

Note the use of v8::String::NewFromUtf8 here; unfortunately I did not find a way to create a string with NAN, so I had to do it with the V8 API. Another good point here is Nan::GetCurrentContext()->GetIsolate(). That method returns an object of type Isolate*, which is required for most V8 API calls and represents something like the V8 managed heap: a space where all variables live and die with the GC.

Using async workers

In most cases you want to create asynchronous functions so as not to block the Node.js main thread. You can use general C/C++ thread management, but V8 and Node.js are not thread-safe, and if you call info.GetReturnValue().Set(result) from the wrong thread you can corrupt data and will get an exception for sure.

NAN introduces Nan::AsyncWorker, a class with several virtual methods that should be overridden to create an async operation; it simplifies dispatching results back from another thread. The most important methods are HandleOKCallback, HandleErrorCallback and Execute. The Execute method runs in a separate thread and performs the asynchronous operation. HandleOKCallback is called if Execute finishes without problems; HandleErrorCallback is called in case of an error in Execute. Thus, to implement an asynchronous operation with a callback you can inherit from Nan::AsyncWorker and override the virtual methods in the following way.

class Worker : public Nan::AsyncWorker {
    public:
        Worker(Nan::Callback *callback) : AsyncWorker(callback)
        {
        }

        // Runs in a separate thread: no V8 or Node.js calls are allowed here
        void Execute()
        {
            if (do_my_async_action() != MY_SUCCESS_VALUE)
            {
                this->SetErrorMessage("Error!!!");
            }
        }
    protected:
        // Runs in the main thread after Execute() succeeds
        void HandleOKCallback()
        {
            Nan::HandleScope scope; // Required because we create local handles below

            v8::Local<v8::Value> argv[] = {
                v8::String::NewFromUtf8(Nan::GetCurrentContext()->GetIsolate(), "Some result string")
            };

            // Call the callback function
            this->callback->Call(
                1,     // Number of arguments
                argv); // Array of arguments
        }

        // Runs in the main thread if Execute() reported an error
        void HandleErrorCallback()
        {
            Nan::HandleScope scope;

            v8::Local<v8::Value> argv[] = {
                v8::String::NewFromUtf8(Nan::GetCurrentContext()->GetIsolate(), this->ErrorMessage())
            };

            // Call the callback function with the error
            this->callback->Call(1, argv);
        }
};
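The glue that ties this worker to an exported method is not shown here; typically you would write a NAN_METHOD that wraps the JavaScript callback in a Nan::Callback and queues the worker with Nan::AsyncQueueWorker. Assuming such a method is exported as my_async_method (a hypothetical name), the JavaScript side then looks like an ordinary Node.js callback API:

const native = require("./build/Release/my_module.node");

// The argument delivered here is whatever HandleOKCallback
// (or HandleErrorCallback) passed to the callback on the C++ side
native.my_async_method((result) => {
    console.log("native module replied:", result);
});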

Building your module

The build system configuration is more or less simple. You should create a JSON file named binding.gyp and add your source files and other options to it. The build step itself is always just compilation of your C/C++ files. node-gyp automatically prepares build configurations for every platform during module installation: on Windows it creates solution/project files and builds your module with Visual Studio, on Linux it prepares a makefile and builds everything with gcc. Below you can find one of the simplest possible binding.gyp files.

{
  "targets": [{
                "target_name": "my_module",
                "sources": [ "main.cpp" ]
             }]
}

Additionally, you can configure specific options for specific platforms. You can find more about node-gyp here: https://github.com/nodejs/node-gyp.

To build your module automatically during installation, add the following scripts section to package.json:

"scripts": {
    "install": "node-gyp rebuild"
}

To build your module during development you can use the node-gyp command with the build or rebuild parameter, as shown below.
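A typical development loop with the standard node-gyp commands looks like this:

# Generate platform-specific build files (VS projects or makefiles)
node-gyp configure

# Compile the module into ./build/Release/
node-gyp build

# Or clean and do both steps from scratch
node-gyp rebuild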

Conclusion

The V8 and NAN APIs are complicated and do not have very detailed documentation, so my idea is to keep the C/C++ module as simple as possible and use only plain methods and async workers, without creating complex JavaScript structures through the API. This helps to avoid memory leaks (in some cases it is very hard to understand from the docs how and when you should free memory) and other problems. You can then add a complicated JavaScript wrapper around your simplified async methods and create a rich, easy-to-use API. I used this approach in my module here: https://github.com/TesserisPro/imagmagick2

Electron. Asynchrony, Modules and C/C++

There are a lot of posts about Node.js describing innovative solutions for JavaScript on the server side. However, there is another place where JavaScript, V8 and the Chrome HTML engine can be applied. That place is your desktop, and Electron is a technology that makes it possible to create desktop applications with JavaScript/HTML. I’m sure most of us already have experience with Electron, and that everybody reading this has at least one Electron-based application on their desktop. Some of those applications are really awesome: for example Visual Studio Code, Slack, or Atom, where the Electron platform began. So I’m a little late with this post, but let me share some of my problems and the ways to solve them.

What is Electron?

According to the documentation, Electron is Node.js and Chromium running on the same V8 engine. As a result we get something like the possibility to run node modules in the browser, which allows access to the local file system and the use of any C/C++ modules.
Any Electron application has at least two processes: one main process and one or more renderers. The main process is a pure Node.js process and the entry point of the application. The main process is responsible for creating windows, and every window has its own renderer process.

In a renderer process you are inside the Chrome browser, but with the power of Node.js.

You can easily communicate between processes with IPC; Electron gives you a very simple API for this, sketched below.
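A minimal sketch of that API (the channel names are made up):

// In the main process
const {ipcMain} = require("electron");
ipcMain.on("my-channel", (event, arg) => {
    // Reply to the renderer that sent the message
    event.sender.send("my-channel-reply", arg + " received");
});

// In a renderer process
const {ipcRenderer} = require("electron");
ipcRenderer.on("my-channel-reply", (event, arg) => console.log(arg));
ipcRenderer.send("my-channel", "hello");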

I think that is more than enough about Electron to get the general picture. You can try it with this tutorial: http://electron.atom.io/docs/tutorial/quick-start/.

Asynchronous operations

It is very hard to imagine a modern desktop application without asynchronous operations. We need to show progress while loading something, we need to perform background calculations, and many more features like these. However, Electron is not so good at asynchronous operations.

One of the first ideas for doing something asynchronously is to send an IPC message to the main process and wait for the results. You may not believe me if I say that this can freeze your UI, but consider the following sample:

https://github.com/peleshenko/freezing_node_sample

Clone it, run npm install, and execute run.sh.

When we try to freeze the UI with a long-running loop in the renderer, you cannot type in the input field, but CSS animation keeps working and you can close the window. But when we try to freeze the application with a long-running loop in the main process, even CSS animation freezes and you cannot close the window.

Amazing!

You can review the corresponding discussion on the Electron GitHub page: https://github.com/electron/electron/issues/3363

Short answer:

... a blocking operation in main process is easy to block all other processes... ...it is required to put IO and CPU-bound operations in new threads to avoid this...

So the first lesson learned: forget about the main process, leave it alone. This process is for Chromium, not for you; use your own.

The next idea is to use WebWorkers. Electron supports WebWorkers, but unfortunately you have no access to Node.js inside them. You can use the usual browser API, or calculate something, but no Node.js. That fact makes WebWorkers almost useless in Electron, because most operations that require async execution are related to the file system or other external things.

So there is only one way to start something in a parallel thread using JavaScript: child_process.fork. Not very simple, but a working way, sketched below.
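A minimal sketch of this approach (the file name and the work function are made up):

// In the renderer: fork a separate Node.js process for the heavy work
const {fork} = require("child_process");
const path = require("path");

const worker = fork(path.join(__dirname, "worker.js"));
worker.send({file: "huge-image.png"});
worker.on("message", (result) => {
    console.log("work finished:", result);
    worker.kill();
});

// worker.js: a separate process, so blocking here is safe
process.on("message", (task) => {
    const result = doHeavyWork(task); // placeholder for the real work
    process.send(result);
});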

Alternatively, you can create a C/C++ addon and do anything you want inside it with C++: start any number of threads and dispatch results to Electron’s message loop with a callback. But that is another story…

C/C++ Addons

As you can see, addons are not something unusual; even a simple async operation may require creating one. An additional reason that may force you to implement your own C/C++ addon is the quality of existing addons in npm. Some of them are really terrible.

After my last words I think you recall your last attempt to install an npm module on Windows. You had to install a specific version of Visual Studio and Python 2.7. My first reaction was: “What? The module author is crazy, he is mixing C++, Python and Node.js. Give me another module…”. But every module that has C/C++ code inside will require Python and Visual Studio, and sometimes a specific version of Visual Studio.

You need all this software to install the module because of the Node.js build system, node-gyp. That is the only way to include a C/C++ addon in your npm module. It seems strange, but if you take a deeper look you will understand the reason. Let me leave this topic for another post, though.

OK, you finally installed Visual Studio and Python and installed the module. You start your Electron application and see the following in the dev tools console (numbers may differ):

Module version mismatch. Expected 48, got 47

The source of the problem is that a node addon is only compatible with exactly the same Node.js version it was built for. You have just built it for your Node.js, but Electron has a slightly different Node.js version inside. Doh!!!

To solve the problem you can use electron-rebuild. Follow its instructions and you will be able to rebuild the required modules for the Node.js version used in Electron, as below.
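The usual steps look something like this (check the electron-rebuild README for the current options):

# Add electron-rebuild to the project
npm install --save-dev electron-rebuild

# Rebuild native modules against the Node.js version inside Electron
./node_modules/.bin/electron-rebuild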

And the last…

That is it about Electron and its problems in general. We have more and more Electron projects at Tesseris Pro, so I hope to share my experience with C/C++ addons and React in Electron in the next posts.

And one more thing. Electron is Chromium + Node.js, so it already has the Node.js module system, and that system is not so bad. It looks very strange to me when people try to use system.js or require.js or any other JavaScript module system to load modules in Electron. Electron is not a browser! It is Node.js + browser! Use it as Node.js, as in the example below!
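For instance, inside a renderer script a plain require() just works, with no module loader involved (the file names are made up):

// renderer.js: ordinary Node.js require() inside the browser window
const fs = require("fs");
const path = require("path");

document.getElementById("content").textContent =
    fs.readFileSync(path.join(__dirname, "notes.txt"), "utf8");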

Some python fun for presentations

This post is complete off-topic, so the source code is right here 🙂: https://bitbucket.org/dpeleshenko/mdshow

Besides managing my company Tesseris Pro and working as a developer, I work with students at one of the Kharkiv universities. That is a good way to find young talents and prepare them to work in our company.

So I often prepare technical presentations. I have a huge archive of presentations on different technologies. When I prepare my lectures I have to merge these presentations, update obsolete data and so on. I was always using MS PowerPoint. But reworking a presentation by drag’n’drop is terrible, and additionally I had problems with differing slide designs. So I am completely unsatisfied with existing mouse-driven presentation software.

As a developer I prefer to write everything in a problem-oriented language. The first idea was HTML… but it’s too complex, too many letters. LaTeX is good but too complex as well, and I have no math symbols or similar things in my presentations. Markdown is much better. So a presentation done with Markdown became my dream. One important requirement was to have every slide in a separate file, plus a separate file for the general presentation layout and design. There are some open source solutions, but none of them was good enough for me. So I decided to make it myself.

My first idea was to convert Markdown to HTML and render the HTML with CEF. I also have another dream: a cross-platform desktop application technology with Python business logic and HTML/CSS/JS presentation. So I selected Python and CEF as the main technology stack. But unfortunately all CEF bindings for Python are terrible; I spent a whole day trying to run the demo code without any success.

After some additional research I found a very good library for statically rendering HTML/CSS: http://weasyprint.org/. That was completely enough for me. You can find a Python script of about 120 lines that shows presentations based on a directory with Markdown files here: https://bitbucket.org/dpeleshenko/mdshow

To run this code you will need to install WeasyPrint and its dependencies.

I have tested everything on Ubuntu 15.10, but it should work on any Linux, Mac or Windows. Maybe some fine-tuning with GTK will be required.

Hello World in Visual Studio Code on Linux

This post adds a sample as an addition to my previous post about VS Code.

Setup

  1. Install the latest mono as described at http://www.mono-project.com/docs/getting-started/install/linux/#debian-ubuntu-and-derivatives.

  2. Install Visual Studio Code from https://code.visualstudio.com (just unpack and start).

Simple Scenario (no debug and no IntelliSense)

Code

./program.cs

using System;

public static class Program
{
    public static void Main()
    {
        Console.WriteLine("Hello Mono and VS Code!!!");
    }
}

Project configuration

For simple applications without debug support you can skip creating project.json or any other file like it.

Build configuration

./.vscode/tasks.json

In the following task configuration we use mcs (the mono C# compiler) as the build tool, with our single code file as an argument and the msCompile problem matcher.

{
    "version": "0.1.0",
    "command": "mcs",
    "isShellCommand": true,
    "showOutput": "silent",
    "args": ["program.cs"],
    "problemMatcher": "$msCompile"
}

Launch configuration

Just press F5 and VS Code will auto-generate the launch.json we need.

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch",
            "type": "mono",
            "request": "launch",
            "program": "${workspaceRoot}/program.exe",
            "args": [],
            "cwd": "${workspaceRoot}",
            "runtimeExecutable": null,
            "env": {},
            "externalConsole": false
        },
        {
            "name": "Attach",
            "type": "mono",
            "request": "attach",
            "address": "localhost",
            "port": 5858
        }
    ]
}

And that’s it

  • Ctrl+Shift+B to build
  • F5 to run and see output in debug console

VS Code Debug Console

Complete scenario

If you need to add debugger and IntelliSense support to the simple project described above, just add a project.json file.

Additional setup

To use project.json we need to install DNX, because project.json is part of the DNX build system. Run the following commands to install DNX for mono:

curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh
dnvm upgrade -r mono

./project.json

Below is a simple project.json file that includes all *.cs files in all subdirectories and uses dnx451 as the framework. Taking into account that we have configured DNX to use mono, dnx451 means mono in our case.

{
    "configurations": {
        "Debug": {
            "compilationOptions": {
                "define": ["DEBUG", "TRACE"]
            }
        },
        "Release": {
            "compilationOptions": {
                "define": ["RELEASE", "TRACE"],
                "optimize": true
            }
        }
    },
    "frameworks": {
        "dnx451": {
            "frameworkAssemblies": {
                "System": ""
            }
        }
    },
    "dependencies": {
    },
    "compile": "*/**/*.cs" 
}

After that you can navigate the code and use IntelliSense, but you are still not able to debug your program, because mcs does not produce *.mdb files by default. To fix this problem just add --debug to the mcs arguments in tasks.json.

./.vscode/tasks.json

{
    "version": "0.1.0",
    "command": "mcs",
    "isShellCommand": true,
    "showOutput": "silent",
    "args": ["program.cs","--debug"],
    "problemMatcher": "$msCompile"
}

Now you can work with all the functionality of VS Code. Just press F5 and start debugging!

VS Code Debugger

Complex projects

For complex projects just use your favorite build tool in tasks.json (see my previous post for more details about tasks.json).

Some useful links

Mono Project

Visual Studio Code

DNX

Project File Description

global.json

Schema for tasks.json

task.json description

Debugging in Visual Studio Code

Version Control in Visual Studio Code

Visual Studio Code on Linux

Microsoft has declared that the new version of .Net and the new alternative dev tool, Visual Studio Code, will be available for multiple platforms, including Linux.

In this post I will try to describe my Visual Studio Code usage experience. I will not describe .Net Core or DNX or Mono in detail and will focus on Visual Studio Code. I will use mono because .Net Core/DNX is currently incomplete and debugging with it is highly complicated under Linux; so I decided to use mono for now and switch to newer technologies later. I’m currently using Ubuntu 15.10, but everything described should work the same way on 14.04 and any other Debian-based Linux.

First of all, you will need to set up the latest mono version. In the Canonical repository you will always find an old version; not sure why, maybe some stability considerations, but I’m not sure the Canonical guys test mono 🙂

Anyway, to install the latest mono go to http://www.mono-project.com/docs/getting-started/install/linux/#debian-ubuntu-and-derivatives and follow the instructions on how to add the mono repository and install the latest version.

Next, let’s install Visual Studio Code. You can download the latest version here: https://code.visualstudio.com/. The downloaded file is just an archive with the application; no installation process is required, just unpack and start.

Also, for some VS Code functionality you will need DNX. Install it according to http://docs.asp.net/en/latest/getting-started/installing-on-linux.html or run the following commands to install DNX for mono:

curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh
dnvm upgrade -r mono

Projects

There are no projects and solutions as there were in the usual Visual Studio. The general idea of Visual Studio Code is that the project folder contains all project-related files, and only project-related files. Also, all project-related files are written in some human-readable language (C#, JS, JSON, etc.); no more magic files with magic GUIDs. Thus you open a project folder, not a project file, with VS Code, and you can configure your project with any text editor, merge project configuration with a merge tool, parse project configuration with automation tools, or do any other task based on documented and clear configuration files.

We still need project description files

If you open a folder with code but no project files in Visual Studio Code, you will be able to use VS Code as a smart text editor and nothing more. However, any modern IDE should have code suggestions, code navigation, in-place error highlighting, debugging and so on. Be sure that Visual Studio Code supports these features, and supports them at a higher level than VS 2015 Community Edition. But to enable all of them you have to explain some details about your code to VS Code: create a project file.

What files can be used to configure a project?

Old project and solution files

VS Code supports *.sln and project files. You cannot open a solution file directly, but the code parsing services will locate and read solution/project files when you open the solution folder.

./**/project.json

The file named project.json is the main project configuration file. You can have several subprojects in your project and configure each of them separately with its own project.json; in the case of .Net, every project file will produce an assembly. See the example below.

   "configurations": {
        "Debug": {
            "compilationOptions": {
                "define": ["DEBUG", "TRACE"]
            }
        },
        "Release": {
            "compilationOptions": {
                "define": ["RELEASE", "TRACE"],
                "optimize": true
            }
        }
    },
    "frameworks": {
        "dnx451": {
            "frameworkAssemblies": {
                "System": "",
                "System.Runtime": ""
            }
        }
    },
    "dependencies": {
        "Newtonsoft.Json": "8.0"
    },
    "compile": "*/**/*.cs" 
}

This configuration file describes two configurations, Debug and Release, with different optimization settings and specific define directives; defines one framework, dnx451, with the framework assemblies used; and specifies the required nuget packages in the “dependencies” section. The compile section says that the project should include all *.cs files in all subdirectories (/**/ means any subdirectory).

Please note that the project.json file is a part of the DNX build system and you have to install DNX to make it work.

The full specification of the project file can be found here: https://github.com/aspnet/Home/wiki/Project.json-file.

./global.json

If you have several projects, you can group them together and explain to VS Code that all the project.json files should be treated as parts of one solution with a global.json file.

One of my global.json files looks like the following:

 {
   "projects": [
    "Guardian.Common",
    "Guardian.Service",

    "Guardian.Module.BoilerMultiRoom",
    "Guardian.Module.RealtimeProvider",
    "Guardian.Module.Watering",
    "Guardian.Module.Update",
    "Guardian.Module.Video",

    "Guardian.Web.Common",
    "Guardian.Web"
    ]
 }

It just contains a list of all projects. You can find a description of the global.json file here: http://docs.asp.net/en/latest/conceptual-overview/understanding-aspnet5-apps.html#the-global-json-file

./.vscode/tasks.json

Here is an example of the tasks.json file from one of my real projects, which has a mono back-end and a TypeScript/HTML/Less front-end.

{
    "version": "0.1.0",
    "command": "gulp",
    "isShellCommand": true,
    "args": ["--no-color"],
    "tasks": [
        {
            "taskName": "default",
            "isBuildCommand": true,
            "showOutput": "silent",
            "problemMatcher": ["$tsc", "$lessCompile",
            {
                "owner": "cs",
                "fileLocation": "relative",
                "pattern": {
                    "regexp": "^\\S(.*)\\((\\d+),(\\d+)\\):.*(error|warning)(.*)$",
                    "file": 1,
                    "line": 2,
                    "column": 3,
                    "severity": 4,
                    "message": 5
                }
            },
            {
                "owner": "general",
                "fileLocation": "relative",
                "pattern": {
                    "regexp": "(error)(ed after)",
                    "file": 1,
                    "severity": 1,
                    "message": 1
                }
            }]
        },
        {
            "taskName": "publish",
            "showOutput": "always",
        }
    ]
}

As you can see, there are some global settings like command and args. Command in this file means the command that should be executed to perform build actions. And yes, the command is global for all tasks. The command can be configured only once, because it should specify the build tool, like msbuild, make, or gulp in my case, and every task name is a target for that build tool. The actual command line will look like <command> <args> <taskName>; in my case, gulp --no-color default.

The default task in my sample has isBuildCommand=true; this means that VS Code should use it to build my project. You can execute the build task with the Ctrl+Shift+B shortcut.

To execute other tasks you can press F1 and then type Run Task followed by Enter. This will list all available tasks; select one and press Enter to execute it.

To parse the result of any task you can specify a problemMatcher. A problem matcher is just a pattern to extract build errors, warnings and any other messages. All extracted errors are shown as an error list in VS Code and in-place in your code, as in the usual VS. You can use one of the existing problem matchers or define your own with a regular expression pattern.

Some of the available problem matchers:

  • $msCompile – Microsoft compilers (C# or C++)
  • $lessCompile – Less compiler
  • $tsc – TypeScript compiler
  • $gulp-tsc – TypeScript compiler implemented as a gulp task

Some notes about single command for all tasks and build tools

At first, thinking as an experienced user of the usual Visual Studio Enterprise, where we have a build plus a lot of other “crutches” that allow us to automate tasks, my reaction was: “WTF, the same command for all tasks???”. But later I noted that usually we have to write yet more “build-crutches” to execute all those “crutches” on a build machine. In VS Code you configure build stages (targets) as tasks and perform them through your build tool. That is a kind of DRY principle applied to build scripts: write any task once and you will be able to use it on the build machine too.

VS Code tasks are a powerful tool that allows us to use any build system and integrate it with the code editor.

For more information about tasks, see the following links.

https://code.visualstudio.com/docs/editor/tasks_appendix

https://code.visualstudio.com/Docs/editor/tasks

./.vscode/launch.json

launch.json describes how to execute and debug the application when you press the F5 key. Here is an example from one of my projects:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch",
            "type": "mono",
            "request": "launch",
            "program": "./publish/Guardian.Service.exe",
            "args": [],
            "cwd": "./publish/",
            "env": {}
        },
        {
            "name": "Attach",
            "type": "mono",
            "request": "attach",
            "address": "localhost",
            "port": 5858
        }
    ]
}

Currently VS Code supports only two launch configurations: “Launch” and “Attach”, to start debugging or to attach to an already started process. You can specify the type of application (currently only “mono” and “node” are supported under Linux) and specify the program to start or the host/port to attach the debugger to.

Unfortunately there is no way to debug .Net Core in VS Code under Linux now. I hope to see it in the nearest future. The most significant problem here is the .pdb/.mdb files: VS Code for Linux supports mono code mapping files (.mdb), while .Net Core produces the usual .pdb files. I hope it is just a question of time, as VS Code can already debug .Net applications under Windows.

Ensure that project description is parsed correctly

When you open a folder with just a project.json file, VS Code automatically parses this file and enables functionality like suggestions and code navigation. In this case you will see a “Running” status bar indicating that VS Code is parsing the project file.

VS Code Is Parsing Project File

When the project is parsed successfully you will see a status like the following.

VS Code Project File

By clicking on the project name in the status bar you can select the project file manually.

In some cases VS Code will not be able to select a project file. In such a case it will show a green “Select project” text in the status bar and you will have to select the project file manually.

When the project file is parsed by VS Code you will be able to use functionality like code navigation, suggestions, etc.

VS Code Navigation

Built-in GIT support

VS Code has a built-in git client that allows you to perform simple git tasks like push/pull, rebase, commit, selecting files for commit, reverting specific files and so on. More complex tasks, like viewing history and merging conflicts, do not look very nice in the current version of VS Code, and most likely you will use some external tools for them.

VS Code Git Support

Read more at https://code.visualstudio.com/Docs/editor/versioncontrol

Other languages and technology support

Visual Studio Code supports a lot of languages besides C#, and support for some of them is even better than in VS Enterprise.

  • Syntax coloring, bracket matching: Batch, C++, Clojure, Coffee Script, Dockerfile, F#, Go, Jade, Java, HandleBars, Ini, Lua, Makefile, Objective-C, Perl, PowerShell, Python, R, Razor, Ruby, Rust, SQL, Visual Basic, XML
  • Snippets: Groovy, Markdown, PHP, Swift
  • IntelliSense, linting, outline: CSS, HTML, JavaScript, JSON, Less, Sass
  • Refactoring, find all references: TypeScript, C#

Summary

  • You can use Visual Studio Code to write, refactor and debug .Net/Mono (C#) code under any OS
  • Support for other languages makes VS Code a highly efficient tool for mixed projects with, for example, TypeScript, Less and C# code
  • Support for custom build tools adds more value to VS Code as a tool for complex mixed projects
  • All project configuration is human-readable JSON that can be easily maintained
  • VS Code has built-in git support that solves 90% of tasks
  • Using VS Code requires another point of view on development: easy-to-understand config files instead of wizards

See my next post for a sample project.

Some useful links

Mono Project

Visual Studio Code

DNX

Project File Description

global.json

Schema for tasks.json

task.json description

Debugging in Visual Studio Code

Version Control in Visual Studio Code

DNX, .Net Core, ASP.Net vNext, who is who?

I’m writing this post after we have done several projects (some commercial, some internal) with these technologies at Tesseris Pro and discovered a lot of things that are not covered by the documentation.

Let’s try to understand the place of every project in the global picture.

Many of us already know about the new version of .Net. There are a lot of resources saying that it will be open source, will run on Linux and OS X without mono, and a lot of other things. Some statements can disappoint, because they conflict with each other.

Let’s review the available projects.

First, let’s understand what DNX and .Net Core are and how they relate to each other.

  • DNX is the Dot Net Execution Environment. As the ASP.Net vNext site says, it is “…a software development kit (SDK) and runtime environment that has everything you need to build and run .NET applications for Windows, Mac and Linux … DNX was built for running cross-platform ASP.NET Web applications…”. And that’s right: with dnu (the DNX utility) you can build projects.
  • .Net Core is a “… cross-platform implementation of .NET that is primarily being driven by ASP.NET 5 workloads… The main goal of the project is to create a modular, performant and cross-platform execution environment for modern applications.” (see .Net Core)

Hm… two projects from MS with the same goals and the same features. And yes, it’s true: DNX and .Net Core currently give us almost the same functionality. These two sites, together with the ASP.Net and VS Code web sites, bring a lot of misunderstanding about what the next .Net version is. What is the reason? The answer is here (https://github.com/dotnet/cli/blob/master/Documentation/intro-to-cli.md): “We’ve been using DNX for all .NET Core scenarios for nearly two years… ASP.NET 5 will transition to the new tools for RC2. This is already in progress. There will be a smooth transition from DNX to these new .NET Core components.” It looks like DNX will be replaced by tools from .Net Core.

OK, what about .Net Framework 4.6 and Mono? .Net Framework (https://www.microsoft.com/net) will continue its evolution as the framework with WPF and other Windows-specific stuff, and it will be compatible with .Net Core. This means that it will not duplicate core functionality, but will offer additional services instead. And as it was before, the most interesting things, like WPF, will be MS Windows-only. The same story with mono, I think.

Let’s summarize

.Net Core – a set of cross-platform tools to build applications, cross-platform execution tools, and a set of cross-platform core libraries (like System, System.Runtime, System.IO, etc.)

DNX – obsolete (at least for ASP .Net 5); a set of cross-platform tools and a runtime environment with almost the same feature set as .Net Core

.Net Framework – a set of libraries for developing Windows desktop and web applications; some assemblies may be cross-platform, as the assembly format is the same in all the described technologies

Mono – a set of libraries that partially replaces .Net Framework under Linux and OS X, plus execution tools and build tools

The assembly format is the same, so mono can execute a Core or Framework assembly and vice versa. The most significant problem, besides P/Invokes to the Win API, is references. Currently all the described frameworks distribute functionality differently across assemblies. So sometimes you will not be able to start an application, because the application will search for class C in assembly A, but in the actual runtime class C will be located in assembly B.

Some additional notes about build tools

Both .Net Core and DNX have the new project file format, project.json. It aims to use the file system structure as the project structure and allows building the application for different platforms at the same time. As a result you will have a set of assemblies, each referencing the correct assembly for every class.

Both tools work on Linux and OS X (OS X has not been tested by me yet).
One significant problem now is debugging under Linux. To debug an application we need a .pdb (.mdb) file that binds the binary assembly to the source code files. DNX tools are not able to produce any debug files; .Net Core tools can produce *.pdb files, but VS Code and MonoDevelop need *.mdb files to debug under Linux. So for now it’s better to use mono under Linux if you would like to debug 🙂 Even if you are going to use VS Code.

One important thing is that the .Net Core build tools can produce a small native Linux executable to start the application without “mono app.exe”.

My next post will be about the build tools and how to set up a build environment under Linux.

WPF vs. GDI+. Some additional notes.

In one of my previous posts, WPF vs. GDI+, I wrote about the performance of WPF and how to solve its problems. After some experiments at Tesseris Pro we’ve found more improvements to the solution described in that post. The main point is that converting a GDI bitmap to a WPF bitmap requires memory allocation and decreases performance. Fortunately, there is a solution that allows mapping a WPF bitmap onto a GDI bitmap, so that when we draw on one bitmap the other changes too, because they are located in the same memory.

First we will need some API calls. You can read the full descriptions in MSDN, but the names of the functions are more than descriptive, if you know the Win API of course 😉

[DllImport("kernel32.dll", SetLastError = true)]
static extern IntPtr CreateFileMapping(
                IntPtr hFile, 
                IntPtr lpFileMappingAttributes, 
                uint flProtect, 
                uint dwMaximumSizeHigh,
                uint dwMaximumSizeLow,
                string lpName);

[DllImport("kernel32.dll", SetLastError = true)]
static extern IntPtr MapViewOfFile(
                IntPtr hFileMappingObject,
                uint dwDesiredAccess,
                uint dwFileOffsetHigh,
                uint dwFileOffsetLow,
                uint dwNumberOfBytesToMap);

[DllImport("kernel32.dll", SetLastError = true)]
static extern bool UnmapViewOfFile(IntPtr hFileMappingObject);

[DllImport("kernel32.dll", SetLastError = true)]
static extern bool CloseHandle(IntPtr handle);

Then, before creating the bitmaps, let’s create a memory-mapped file as the source for them:

var format = PixelFormats.Bgr32;

var pixelCount = (uint)(width * height * format.BitsPerPixel / 8);
var rowWidth = width * (format.BitsPerPixel / 8);

this.fileMapping = CreateFileMapping(
                       new IntPtr(-1),  // -1: back the mapping with the system page file
                       IntPtr.Zero,
                       0x04,            // PAGE_READWRITE
                       0,
                       pixelCount,
                       null);

this.mapView = MapViewOfFile(
                       fileMapping,
                       0xF001F,         // FILE_MAP_ALL_ACCESS
                       0,
                       0,
                       pixelCount);

When we call CreateFileMapping with new IntPtr(-1) as the first parameter, Windows doesn’t map an actual file to memory but uses the system page file as the source of the mapping. And of course in this case we should specify the size of the file, which is width * height * format.BitsPerPixel / 8.

Now let’s create two bitmaps mapped to this file:

this.bitmap = new System.Drawing.Bitmap(
              width, 
              height,
              rowWidth,
              System.Drawing.Imaging.PixelFormat.Format32bppPArgb,
              this.mapView);

this.image = (System.Windows.Interop.InteropBitmap)
  System.Windows.Interop.Imaging.CreateBitmapSourceFromMemorySection(
                                         fileMapping, 
                                         width, 
                                         height, 
                                         format, 
                                         rowWidth, 
                                         0);

Now you can use the following code in OnRender:

protected override void OnRender(DrawingContext dc)
{
   // Ensure that bitmap is initialized and has correct size
   // Recreate bitmap ONLY when size is changed
   InitializeBitmap((int)this.width, (int)this.height);

   //TODO: Put here your drawing code

   // Invalidate and draw bitmap on WPF DrawingContext
   this.image.Invalidate();
   dc.DrawImage(
           this.image, 
           new Rect(0, 0, this.bitmap.Width, this.bitmap.Height));
}

Please note that this.image should be of type System.Windows.Interop.InteropBitmap to be able to call the Invalidate method. And don’t forget to call UnmapViewOfFile and CloseHandle.

Editing code directly in the browser (WebKit)

Maybe some people already know this feature, but I discovered it only today. I do a lot of JS debugging, and today I found the possibility to edit code and save it to disk with Google Chrome (or any other WebKit-based browser). Hope this will simplify some debugging tasks for you. See how to enable this feature below:

  1. Add folder(s) with your source code to the browser’s workspace
    Add folder to Workspace

  2. Allow the browser to access the file system (I have a Ukrainian browser; in English you have to click “Allow”)
    Allow access to file system

  3. Select the file and map it to a file system resource
    Map to file system

  4. Select the resource mapping from your workspace (added in the first step)
    Select file from workspace

  5. Don’t forget to restart the dev tools

  6. Edit your file in the browser

  7. See the changes in Visual Studio or any other IDE
    See changes in your VS

Hope this helps you save some time and spare the Alt and Tab keys on your keyboard 😉