Category Archives: Web

JavaScript: an ugly language from the past becomes the straightforward technology of the future.

It’s not news anymore that JavaScript is the most cross-platform and cross-tier technology today. Let’s take a look at it. JavaScript can be used efficiently on the server side of a web application and to create modern micro-services. JavaScript is for now the only technology that allows you to build mobile applications for ALL mobile platforms with 100% the same UI and logic code on every platform, using Apache Cordova or React Native from Facebook. JavaScript allows you to build cross-platform desktop applications, and what is more, those desktop applications can easily be converted to mobile ones. And of course all web browsers support JavaScript. Even some micro-controllers are starting to support JavaScript. Really, JavaScript is everywhere now. And it is highly efficient everywhere.

But why? Why is JavaScript so popular, taking into account that it has no classic OOP support and no multi-threading support? It is a kind of magic when limitations become benefits. Because of the limited functionality, developers have to build “plugins” in other languages (for example, native modules for Node.js or native modules for Cordova), and these plugins are simple units that solve simple tasks. JavaScript’s limitations force developers to build small, simple units that do one thing but do it well. And these units are reusable. Dreams come true!

From another point of view, the simplicity of JavaScript keeps developers away from over-engineering.

So what is JavaScript today? It is a language for writing application logic in the most simple and efficient way, without mixing it with technical details.

A Node.js C/C++ module is actually simple

Node.js is one of the most unexpected technologies for me: JavaScript on the server side was an unbelievable thing for me 5 years ago. And to be honest, JavaScript was not a very straightforward technology back then; its main goal was to handle HTML/CSS based UI. But now we have several successful Node.js projects at Tesseris Pro, and it seems like everything will be done with JavaScript soon. JavaScript itself has become a much more serious and straightforward language. In my last post I described possible ways to run asynchronous operations in Electron.

Another problem related to that Electron project was the creation of a C/C++ module for the ImageMagick library. There were several modules in npm: some of them were just CLI wrappers, some were wrappers around the C++ API. Both kinds seem to be wrappers created by somebody to solve their exact problem, and they do not solve mine; in addition, CLI wrappers are slow. Thus I decided to create one more limited wrapper just for my needs – imagemagik2. I hope one day I will be able to make it more or less full-featured. But let me describe my experience with C/C++ Node.JS module creation…

You can find source code here: https://github.com/TesserisPro/imagmagick2

What are C/C++ Node.JS modules?

A Node.JS native module is a DLL (or its equivalent on other OSes) that contains code interacting with Node.js through the V8 API. This file is renamed to *.node by a specific build procedure. Inside your C/C++ module you can manipulate the JavaScript abstraction representation – create variables, functions and objects, control their parameters, etc.
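From the JavaScript side, a compiled addon is loaded with a plain require() call. A minimal sketch (the path is hypothetical, and the build output may be absent, so the call is guarded):

```javascript
// A compiled addon is loaded with plain require(); the path below is
// hypothetical and the build output may be absent, so the call is guarded.
let addon = null;
try {
  addon = require("./build/Release/my_module.node");
} catch (err) {
  // module not compiled for this machine/Node version
}
console.log(addon ? "addon loaded" : "addon not available");
```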

Bad news:
– The V8 API is extremely over-complicated, and you should be an experienced C/C++ developer to use it.
– Node.js introduces an additional abstraction layer, Native Abstractions for Node.js (NAN), to simplify module programming. It has very poor documentation, but you have to use it, because without that abstraction your module may not be compatible with older or newer versions of Node.js.
– You have to recompile your module for every exact version of Node.js, and for every platform of course. If you try to use a module compiled for another version of Node – even one with only minor changes, on the same platform – you will see an error while loading the module, like “Module version mismatch. Expected 48, got 47”.
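The numbers in that mismatch error are the native-module ABI version, which every Node build exposes at runtime; a quick way to see what your Node expects:

```javascript
// The numbers in "Module version mismatch. Expected 48, got 47" are the
// Node.js native ABI version; the running Node exposes its own value here:
const abiVersion = process.versions.modules;
console.log("Native modules must be compiled for ABI version " + abiVersion);
```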

Interesting news:
– The module building tool (node-gyp) forces you to build a cross-platform module
– You will need Python to build your extension. It is not a problem actually, but it’s funny 🙂

Good news:
– It is not as complex as you first thought 🙂

Hello world module

You can find simple module sample in Node.js documentation.

Module startup

Seems to be obsolete

On the Node.js web page you can find that the entry point of your module is a void Init(Local&lt;Object&gt; exports, Local&lt;Object&gt; module) function, and module registration should be done with NODE_MODULE(addon_name, Init). To add module content you can just register all parts of your module by adding them to the exports parameter. Another option is to overwrite the whole exports with a function or anything you want – exactly the same idea as in usual JavaScript modules: module.exports = {...}.

The type Local&lt;Object&gt; is actually an object reference managed by the V8 garbage collector. According to the V8 API documentation there are two types of handles: local and persistent. Local handles are light-weight and transient, and are typically used in local operations; they are managed by HandleScopes. Persistent handles can be used when storing objects across several independent operations and have to be explicitly deallocated when they’re no longer used.

Seems to be current

According to the NAN documentation, the entry points are the NAN macros NAN_MODULE_INIT(init){ } and NODE_MODULE(your_module_name, init), where init is just an identifier and can be changed.

To add something to your exports you can use the NAN macro NAN_EXPORT(target, your_object). The most confusing part here is target – you never define it. It is just a naming convention defined in nan.h and node.h:

// nan.h
#define NAN_MODULE_INIT(name) void name(Nan::ADDON_REGISTER_FUNCTION_ARGS_TYPE target)

// node.h
#define NODE_MODULE(modname, regfunc) NODE_MODULE_X(modname, regfunc, NULL, 0)

NAN is full of macros that define hidden variables and other C language objects. That makes it very hard to understand.

The full code of module startup, with registration of a method, looks like the following:

#include "std.h"

NAN_METHOD(my_method) {
    // Write code here;    
}

NAN_MODULE_INIT(init) {
    NAN_EXPORT(target, my_method);
}

NODE_MODULE(my_module, init);

Creating a function

To add some functionality to our module you can create a function and register it. And here we have a hidden identifier again. Here is the macro from nan.h:

#define NAN_METHOD(name)  Nan::NAN_METHOD_RETURN_TYPE name(Nan::NAN_METHOD_ARGS_TYPE info)

You can use the info argument to read function arguments in the following way:

    double myNumberValue = info[0].As<v8::Number>()->Value(); // First argument
    Nan::Utf8String myStringValue(info[1].As<v8::String>()); // Second argument
    char *actualString = *myStringValue; //Way to access string

As you can see, here we have some NAN wrappers again; this time Nan::Utf8String is a useful thing – it saves several lines of code related to the V8 string implementation.

To send the result of your calculations back to the JavaScript world, you can set the return value with the following code:

// Create new V8 object
v8::Local<v8::Object> result = Nan::New<v8::Object>();

// Set some object fields
Nan::Set(result, v8::String::NewFromUtf8(Nan::GetCurrentContext()->GetIsolate(), "my_field_name"), Nan::New((int)my_field_value));

// Set object as return value
info.GetReturnValue().Set(result);

Here you can see v8::String::NewFromUtf8; unfortunately I did not find a way to create a string with NAN, so I had to do it with the V8 API. Another good point here is Nan::GetCurrentContext()->GetIsolate(). That method returns an object of type Isolate*, which is required for most V8 API calls and represents something like the V8 managed heap – a space where all variables live and die with the GC.

Using async workers

In most cases you want to create asynchronous functions so as not to block the Node.js main thread. You can use general C/C++ thread management, but V8 and Node.js are not thread safe, and if you call info.GetReturnValue().Set(result) from the wrong thread you can damage data and will get an exception for sure.

NAN introduces Nan::AsyncWorker, a class with several virtual methods that should be overridden to create an async operation; it simplifies dispatching results back from another thread. The most important methods are Execute, HandleOKCallback and HandleErrorCallback. Execute runs in a separate thread and performs the asynchronous operation. HandleOKCallback is called if Execute finishes without problems; HandleErrorCallback is called in case of an error in Execute. Thus, to implement an asynchronous operation with a callback you can inherit from Nan::AsyncWorker and override the virtual methods in the following way.

class Worker : public Nan::AsyncWorker {
    public:
        Worker(Nan::Callback *callback) : AsyncWorker(callback) 
        {

        }

        void Execute() 
        {
            if (do_my_async_action() != MY_SUCCESS_VALUE) 
            {
                this->SetErrorMessage("Error!!!");
            }
        }
    protected:
        void HandleOKCallback()
        {
            v8::Local<v8::Value> argv[] = { 
                v8::String::NewFromUtf8(Nan::GetCurrentContext()->GetIsolate(), "Some result string")
            };

            // Call callback function
            this->callback->Call(
                1,     // Number of arguments
                argv); // Array of arguments
        }

        void HandleErrorCallback()
        {
            v8::Local<v8::Value> argv[] = { 
                v8::String::NewFromUtf8(Nan::GetCurrentContext()->GetIsolate(), this->ErrorMessage()) 
            };

            // Call callback function with error
            this->callback->Call(1, argv);
        }
};
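From the JavaScript side, a method backed by such a worker behaves like an ordinary error-first callback API. A hypothetical sketch of that contract (a real addon calls back asynchronously from the worker thread; the callback here runs synchronously only to keep the sketch small):

```javascript
// Hypothetical JS-side contract of a method backed by the worker above:
// an error-first callback receives either the Error built from
// HandleErrorCallback or the string passed by HandleOKCallback.
function myMethod(input, callback) {
  if (typeof input !== "string") {
    return callback(new Error("Error!!!"));
  }
  callback(null, "Some result string");
}

myMethod("image.png", function (err, result) {
  if (err) throw err;
  console.log(result); // → Some result string
});
```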

Building your module

The build system configuration is more or less simple. You should create a JSON file named binding.gyp and add your source files and other options to it. The build will always be just a compilation of your C/C++ files. node-gyp will automatically prepare build configurations for every platform during module installation: on Windows it will create solution/project files and build your module with Visual Studio; on Linux it will prepare a makefile and build everything with gcc. Below you can find one of the simplest binding.gyp files.

{
  "targets": [{
    "target_name": "my_module",
    "sources": [ "main.cpp" ]
  }]
}

Additionally you can configure specific options for specific platforms. You can find more about node-gyp at https://github.com/nodejs/node-gyp.
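Platform-specific settings go into a gyp conditions block; a minimal sketch (the define and the compiler flag here are illustrative, not required by node-gyp):

```json
{
  "targets": [{
    "target_name": "my_module",
    "sources": [ "main.cpp" ],
    "conditions": [
      [ "OS=='win'",   { "defines": [ "MY_MODULE_WINDOWS" ] } ],
      [ "OS=='linux'", { "cflags":  [ "-std=c++11" ] } ]
    ]
  }]
}
```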

To build your module automatically during installation, add the following scripts section to package.json:

"scripts": {
    "install": "node-gyp rebuild"
}

To build your module during development you can use the node-gyp command with the build or rebuild parameter.

Conclusion

The V8 and NAN APIs are complicated and not very well documented, so my idea is to keep the C/C++ module as simple as possible: use only methods and async workers, without creating complex JavaScript structures through the API. This helps avoid memory leaks (in some cases it is very hard to understand from the docs how and when you should free memory) and other problems. You can add a complicated JavaScript wrapper around your simplified async methods inside your module and create a rich and easy-to-use API. I used this approach in my module here: https://github.com/TesserisPro/imagmagick2

Some python fun for presentations

This post is complete off-topic, so here is the source code right away 🙂 : https://bitbucket.org/dpeleshenko/mdshow

Besides managing my company Tesseris Pro and working as a developer, I work with students at one of Kharkiv’s universities. That is a good way to find young talents and prepare them to work in our company.

So I very often prepare technical presentations. I have a huge archive of presentations on different technologies. When I prepare my lectures I have to merge these presentations, update obsolete data and so on. I was always using MS PowerPoint. But reworking a presentation by drag’n’drop is terrible. Additionally I have problems with differing slide designs. So I am completely unsatisfied with existing mouse-driven presentation software.

As a developer I prefer to write everything in a problem-oriented language. The first idea was HTML… but it’s too complex… too many letters. LaTeX is good but also too complex – I have no math symbols or other similar things in my presentations. Much better is Markdown. So a presentation done with Markdown became my dream. One important thing was to have every slide in a separate file, and a separate file for the general presentation layout and design. There are some open source solutions, but none of them was good enough for me. So I decided to make it myself.

My first idea was to convert Markdown to HTML and render the HTML with CEF. Taking into account that I have another dream – a cross-platform desktop application technology with Python business logic and HTML/CSS/JS presentation – I selected Python and CEF as the main technology stack. But unfortunately all CEF bindings for Python are terrible. I spent a whole day trying to run the demo code without any success.

After some additional research I found a very good library to statically render HTML/CSS – http://weasyprint.org/. That was completely enough for me. You can find a Python script of about 120 lines that shows presentations based on a directory with Markdown files here: https://bitbucket.org/dpeleshenko/mdshow

To run this code you will need to install following:

I have tested everything on Ubuntu 15.10, but everything should work on any Linux, Mac or Windows. Maybe some fine-tuning with GTK will be required.

DNX, .Net Core, ASP.Net vNext, who is who?

I’m writing this blog post after we have done several projects (some commercial, some internal) with these technologies at Tesseris Pro and discovered a lot of things that are not covered by the documentation.

Let’s try to understand the place of every project in the global picture.

Many of us already know about the new version of .Net. There are a lot of resources saying that it will be open source, will run on Linux and OS X without Mono, and a lot of other things. And some statements can disappoint because they conflict with each other.

Let’s review available projects

At first let’s understand what DNX and .Net Core are and how they relate to each other.

  • DNX is a Dot Net Execution Environment. As the ASP.Net vNext site says, it is “…a software development kit (SDK) and runtime environment that has everything you need to build and run .NET applications for Windows, Mac and Linux … DNX was built for running cross-platform ASP.NET Web applications…”. And that’s right: with dnu (the DNX utility) you can build projects.
  • .Net Core is a “… cross-platform implementation of .NET that is primarily being driven by ASP.NET 5 workloads… The main goal of the project is to create a modular, performant and cross-platform execution environment for modern applications.”(see .Net Core)

Hm… two projects from MS with the same goals and the same features. And yes, it’s true: DNX and .Net Core currently give us almost the same functionality. And these two sites, together with the ASP.Net and VS Code web sites, bring a lot of misunderstanding about what the next .Net version is. What is the reason for it? The answer is here (https://github.com/dotnet/cli/blob/master/Documentation/intro-to-cli.md): “We’ve been using DNX for all .NET Core scenarios for nearly two years… ASP.NET 5 will transition to the new tools for RC2. This is already in progress. There will be a smooth transition from DNX to these new .NET Core components.” Looks like DNX will be replaced by tools from .Net Core.

Ok, what about .Net Framework 4.6 and Mono? .Net Framework (https://www.microsoft.com/net) will continue its evolution as a framework with WPF and other Windows-specific stuff, and it will be compatible with .Net Core. This means it will not duplicate core functionality, but will instead offer additional services. And as before, the most interesting things, like WPF, will be MS Windows only. The same story with Mono, I think.

Let’s summarize

.Net Core – a set of cross-platform tools to build applications, cross-platform execution tools, and a set of cross-platform core libraries (like System, System.Runtime, System.IO, etc.)

DNX – obsolete (at least for ASP .Net 5); a set of cross-platform tools and a runtime environment with almost the same feature set as .Net Core.

.Net Framework – a set of libraries to develop Windows desktop and web applications; some assemblies may be cross-platform, as far as the assembly format is the same in all the described technologies.

Mono – a set of libraries that partially replaces .Net Framework under Linux and OS X, plus execution tools and build tools.

The assembly format is the same, so Mono can execute a Core or Framework assembly and vice versa. The most significant problem, besides P/Invokes to the Win API, is references. Currently the described frameworks distribute functionality across assemblies differently. So sometimes you will not be able to start an application, because it will search for class C in assembly A, while in the actual runtime class C is located in assembly B.

Some additional notes about build tools

Both .Net Core and DNX have a new project file format: project.json. It aims to use the file system structure as the project structure and allows building an application for different platforms at the same time. As a result you will have a set of assemblies, each referencing the correct assembly for every class.
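A minimal project.json of that era might look like this (the framework monikers dnx451/dnxcore50 come from the DNX tooling; treat the exact shape as a sketch that depends on your tooling version):

```json
{
  "version": "1.0.0-*",
  "dependencies": { },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  }
}
```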

Both tools work on Linux and OS X (OS X was not tested by me yet).
One significant problem now is debugging under Linux. To debug an application we need a .pdb (.mdb) file that binds the binary assembly to the source code files. DNX tools are not able to produce any debug files; .Net Core tools can produce *.pdb files, but VS Code and MonoDevelop need *.mdb files under Linux to debug. So for now it’s better to use Mono under Linux if you would like to debug 🙂 Even if you are going to use VS Code.

One important thing is that the .Net Core build tools can produce a small native Linux executable to start the application without “mono app.exe”.

My next blog post will be about the build tools and how to set up a build environment under Linux.

Editing code directly in browser (WebKit)

Maybe some people already know this feature, but I discovered it only today. A lot of JS debugging brings some benefits, and today I found the possibility to edit code and save it to disk with Google Chrome (or any other WebKit based browser). Hope this will simplify some debugging tasks for you. See how to enable this feature below:

  1. Add folder(s) with your source code to the browser’s workspace
    Add folder to  Workspace

  2. Allow the browser to access the file system (I have a Ukrainian browser; in English you have to click “Allow”)
    Allow access to file system

  3. Select file and map it on file system resource
    Map to file system

  4. Select resource from mapping from your workspace (added in first step)
    Select file from workspace

  5. Don’t forget to restart the dev tools

  6. Edit your file in browser

  7. See the changes in Visual Studio or any other IDE
    See changes in your VS

Hope this helps you save some time and spare the Alt and Tab keys on your keyboard 😉

JavaScript and different ***Script

Today I’ll just share my opinion, so no useful information below 🙂

There are a lot of discussions around TypeScript, CoffeeScript and other languages translated to JS. Let me add several words to this “holy war” 🙂 I was coding for a couple of months with TypeScript. It was a complex UI with Knockout.js and Durandal. Yesterday I switched to a pure JavaScript task. I have to help on another of our projects and write very simple JS code, with just several functions and some calculations.
Just imagine! My performance with JavaScript is at least twice lower than with TypeScript.
Why?
Because “continuous refactoring”, one of the key principles for good code quality – not just good-looking code, but stable code – is very problematic with JS. You have to check everything (variable names, functions, scopes, etc.) when you change something in existing code.
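A tiny illustration of why this hurts (the names here are made up for the example):

```javascript
// A property was renamed to `fullName`, but one call site still uses the
// old spelling. Plain JavaScript reports nothing and just yields undefined;
// a language with compile-time checks would reject the stale name.
const user = { fullName: "John Doe" };

function greet(u) {
  return "Hello, " + u.fullname; // stale name survives until runtime
}

console.log(greet(user)); // → Hello, undefined
```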
My opinion is that humans were not designed to handle such stupid tasks 🙂 Let machines do their work: use languages with compile-time checks. And use a smart IDE, of course 😉
Be Human…

Template project for Node.js with Express

Overview

If you are building node.js applications, you may need a template project to start quickly and not perform simple configuration every time. This configuration may contain access to MongoDB, security infrastructure and so on. My colleague Anton Dimkov has committed his template to Github. The template can also be installed with npm.

Features

  • Configured simple routes for express
  • Integrated MongoDB access with mongoose
  • Integrated security with password hashing based on bcrypt-nodejs
  • Login page
  • Session management with express-session
  • Logging with winston
  • Integrated Angular and simple SPA front-end structure
  • Bootstrap styles integrated

Installation and configuration

  1. Install Node.js according to instructions at https://nodejs.org/
  2. Install and run MongoDB according to the instructions at http://www.mongodb.org
  3. Download the code from GitHub
  4. Install all required modules by running npm install in a terminal
  5. Run the application with node bin/www
  6. Open http://localhost:9090/ in web-browser

Try it – it’s a simple way to start using Node.JS correctly or to start a project.

Explicit call to RequireJS in TypeScript

I’m working on a mobile application with Apache Cordova (http://cordova.apache.org) technology. One of the tasks was to load a JSON localization file from a content folder. The first and actually most correct idea (as for me) is to load the file with the RequireJS text plugin (https://github.com/requirejs/text).
The plugin allows loading a text file in the same way as usual modules; it does not evaluate the content, but returns it as a string. So you just specify something like the following.

require(["some/module", "text!some/module.html"],
    function(module, html, css) {
        //the html variable will be the text
        //of the some/module.html file
    }
); 

When using TypeScript we can write

import html = require("text!some/module.html");

document.body.innerHTML = html;

And this will give us the following JS code of the module (in case of AMD mode in the compiler):

define(["require", "exports", "text!some/module.html"], 
       function (require, exports, html) {
            document.body.innerHTML = html;
       });

Unfortunately it’s not enough in the case of localization, because we have to load a specific html file for a specific locale (text!/locale/en/module.html), and we have to select the path dynamically depending on the selected locale. In JS we can write the following.

define(["require", "exports","someService"], 
       function (require, exports, someService) {
            var locale = someService.getLocale();
            require(["text!/" + locale + "/en/module.html"], 
                    function(html){
                           document.body.innerHTML = html;
                    });
       });

At first it was not absolutely clear to me how to do this in TypeScript. There is no explicit import of RequireJS in TypeScript, and require is a keyword used as part of a module declaration. I’ve tried to find some description for my case, but without any success. Fortunately the solution is much simpler than I thought:
1. You should add the require.d.ts typing to your project (or to the compiler command line)
2. Then just write the following TypeScript

    var locale = someService.getLocale();
    require(["text!/" + locale + "/en/module.html"], 
            html => document.body.innerHTML = html);

And this will give exactly the same code as in the JS sample above. You can ignore the editor warning about the keyword require – just compile the project and you will get no errors. Please note that I’ve tested this with the TypeScript 1.4 compiler and MS Visual Studio 2013. And don’t forget to use an array of strings as the first argument of require, not a plain string as in the other require syntax.

If you have any other idea how to make it working please add comments.

Front-end with Knockout.js, require.js and TypeScript

Let’s talk about how to correctly organize a front-end with Knockout.js, require.js and TypeScript.

The problem

If we read the TypeScript handbook we will find a lot of information about how to load modules with AMD and require.js, and everywhere in the samples we will find something like this:

    import module=require('./module');

But in a real application we always have some folder structure to keep the files organized, we use different package managers and so on; thus in most cases the import should look like:

    import ko=require('./node_modules/knockout/build/output/knockout-latest')

Unfortunately, for some unknown reason this is not working with TypeScript, at least with versions 1.3 and 1.4. Really, why does the current folder path work, but a more complex path does not? We have to deal with this somehow.

And the only way is to use import ko=require('knockout') instead of the full path to knockout.

In this post I will describe the way to build an HTML application with MS Visual Studio, and I will use the node package manager to load all the libraries, but the same idea will work for NuGet or any other package manager and IDE.

Application structure

  • node_modules
    • knockout
      • build
        • output
          • knockout-latest.js
    • requirejs
      • require.js
    • moment
      • moment.js
  • typings
    • knockout
      • knockout.d.ts
    • moment
      • moment.d.ts
  • config.js
  • application.ts
  • mainViewmodel.ts
  • bindings.ts
  • index.html

A require.js enabled JavaScript (or TypeScript) application should start with a single script tag in the html. In our case it looks like:

    <script data-main='config.js' src='node_modules/requirejs/require.js'></script>

This config.js is the only JavaScript file; all other logic is done in TypeScript. Maybe there is some way to write it in TypeScript, but I’m not sure that it makes any sense, because you have to do JS-specific low-level things here. The config.js looks like the following:

    require.config({
        baseUrl: "",
        paths: {
            knockout: "./node_modules/knockout/build/output/knockout-latest",
            moment: "./node_modules/moment/moment"
        }
    });

    define(["require", "exports", 'application'], function (require, exports, app) {
        app.Application.instance = new app.Application();
    });

First of all, in this file we configure require.js to make it understand where to search for libraries. We will load our index.html from the file system; of course, in a real app you should not rely on the folder structure but think about URLs. Please note that you should not specify the file extension.

Now require.js will understand how to load knockout. But this tells our TypeScript compiler nothing, and the compiler will report errors about an undefined module.

To fix this problem with the compiler, simply add the corresponding typings to the project. Now TypeScript will build everything without errors. Please note that in this case TypeScript will not verify the correctness of the paths to modules, because it can’t determine the real URL structure of the application. That may be the reason why a complex path is not working in import.

Note: don’t forget to switch the TypeScript module type to AMD (Asynchronous Module Definition). This will conflict with node.js, and next time I will explain how to deal with node.js and AMD.

Application startup

Our application entry point (after config.js) is the application.ts file with the following content:

    import vm = require('mainViewModel');
    import ko = require('knockout');
    import bindings = require('bindings');

    export class Application{
        public static instance: Application;

        constructor(){
            bindings.register();
            ko.applyBindings(new vm.mainViewModel());
        }
    }

Here we load the module(s) (as dependencies) with all custom bindings, create the main view model and apply it to the whole page.

Note that we don’t need to specify paths to bindings and mainViewModel in config.js because they are located in the same directory. You can use a more complex structure and everything will work with TypeScript – just don’t forget to explain to require.js how to find all your modules.

Custom bindings

Custom bindings are wrapped in a single module and can be loaded like any other module. Binding handlers will be registered with the bindings.register() call. This can be done with the following content of bindings.ts:

    import ko = require("knockout")
    import moment = require("moment")

    export function register(): void {

        ko.expressionRewriting["_twoWayBindings"].datevalue = true;

        var formatValue = function (value, format) {
            format = ko.unwrap(format);
            if (format == null) {
                format = "DD.MM.YYYY";
            }
            return moment(ko.unwrap(value).toString()).format(format);
        }

        ko.bindingHandlers["datevalue"] = {
            init: function (element: HTMLInputElement, valueAccessor, allBindings, viewModel) {
                element.value = formatValue(valueAccessor(), allBindings.get("dateFormat"));

                element.addEventListener("change", function (event) {
                    var dateValue: any
                        = moment(element.value, ko.unwrap(allBindings.get("dateFormat")))
                            .format("YYYY-MM-DD") + "T00:00:00";

                    if (ko.unwrap(valueAccessor()) instanceof Date) {
                        dateValue = new Date(dateValue);
                    }

                    if (ko.isObservable(valueAccessor())) {
                        valueAccessor()(dateValue);
                    }
                    else {
                        allBindings()._ko_property_writers.datevalue(dateValue);
                    }
                });
            },
            update: function (element: HTMLInputElement, valueAccessor, allBindings) {
                element.value = formatValue(valueAccessor(), allBindings.get("dateFormat"));
            }
        }
    }

Here we create a very useful datevalue binding, which allows editing and displaying dates as strings in a specific format. This binding is able to work with observables and flat values, and stores the date in a JSON-compatible format or as a Date, depending on the initial value of the bound property. The binding contains some knockout and TypeScript tricks, like ko.expressionRewriting["_twoWayBindings"].datevalue = true and allBindings()._ko_property_writers.datevalue(dateValue), but let’s talk about these tricks in the next blog posts.
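For completeness, the binding is used in markup like this (bound here to the birthday observable from the view model in the next section; the format string is just an example):

```html
<input data-bind="datevalue: birthday, dateFormat: 'DD.MM.YYYY'" />
```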

ViewModel

Nothing special – just a usual view model organized as a module:

    import ko = require('knockout');

    export class mainViewModel{

        constructor(){
        }

        public name = ko.observable("John Doe");
        public birthday = ko.observable("1983-01-01");
    }

Conclusion

Everybody is waiting for ECMAScript 6 support in all browsers, with all the sweet things like classes, arrows, modules and so on. Life is too short to wait – let’s use TypeScript today! I’ve tested it in a big project, and yes, sometimes it looks a little raw, but it’s working and makes our life easier with type checking and better IntelliSense.

Knockout.js components. What they are and what they are not

What are components?

Some time ago the Knockout.js team released a new feature – components. This feature allows a developer to build custom components that have their own view and logic. Registration of a component looks almost like binding registration:

ko.components.register('mywidget', {
    viewModel: function(params) {
        //Define view model here
        this.title = ko.observable("Hello from component!!!");
    },
    template: '<div data-bind="text: title"></div>'
});

The example above looks very similar to the definition of a user control in technologies like WPF/Silverlight or even WinForms: we have a template to define the view of the element and a view-model to define its logic.

Most interesting (for me personally) is the usage of these components as custom elements, i.e. custom HTML tags. After registering the widget from the previous example, you can write the following in your HTML code:

<mywidget></mywidget>

Brief description (skip it if you already familiar with components)

And this HTML tag will be replaced with the template of the component, with the component's view model applied.

The template can be defined in the following ways:

  • With an existing element id. The template element can be any existing element: a div, a template, or anything else.
template: { element: 'my-component-template'}

The element content (only the children of the element) will be cloned to the place where you apply your custom element.

  • With an existing element instance
template: { element: document.getElementById('...') }
  • Directly with a string of markup
template: '<div data-bind="text: title"></div>'
  • With an array of element instances (the elements will be added sequentially)

  • With AMD (Asynchronous Module Definition)

template: { require: 'text!some-template.html' }

Require.js or any other AMD module loader can be used. See https://github.com/amdjs/amdjs-api/blob/master/AMD.md for details.
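For example, registration with a template taken from an existing element id could look like this (a sketch; the id and markup are made up for illustration, and only the children of the template element are cloned):

```html
<div id="my-component-template" style="display: none">
    <div data-bind="text: title"></div>
</div>

<script>
    ko.components.register('mywidget', {
        viewModel: function (params) {
            this.title = ko.observable("Hello from component!!!");
        },
        template: { element: 'my-component-template' }
    });
</script>
```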

For the view-model configuration you can use one of the following:

  • A constructor function
viewModel: function(params) {
    //Define view model here
    this.title = ko.observable("Hello from component!!!");
}
  • An existing instance of a view model
viewModel: { instance: viewModelInstance }
  • A factory function
viewModel: {
              createViewModel: function(params, componentInfo) { 
                                  return new ViewModel(params); 
                               }
           }

Here we have an additional parameter, componentInfo. This parameter gives us access to our custom element via componentInfo.element. But unfortunately we can't access this element before the template is applied, to analyze it as it was initially added to the document. I'll explain a little later why I said "unfortunately".

  • Load the view model through AMD
viewModel: { require: 'some/module/name' }
  • Specify the whole component as a single AMD module
define(['knockout'], function(ko) {
    return {
        viewModel: function(params) {
           this.title = ko.observable("Hello from component!!!");
        },
        template: '<div data-bind="text: title"></div>'
    };
});

And register it with:

ko.components.register('my-component', { require: 'some/module' });
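Whichever registration option you choose, the usage stays the same: parameters are passed through the params attribute and arrive as the params argument of the view-model constructor or factory. A sketch (assuming the component's view model actually reads params.title, which the examples above do not do):

```html
<my-component params="title: 'Hello from params!'"></my-component>
```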

What components are not?

Let's assume we would like to build a component for a Bootstrap button with a popover. We would like to open this popover when the button is clicked, and when another button inside the popover is clicked, call some handler in the view-model. Something like a button with confirmation. And we would like to provide a custom confirmation template with elements bound to the view model.

donate-btn

It would be nice to have a component with the following syntax:

<popover text="Donate" 
		 data-bind="command: makeDonation"
		 title="Enter amount of donation">
	<input class='form-control text-right' type='text' 
               data-bind='value: donationAmount' />
</popover>

But unfortunately it's not possible. There is no way to read the HTML content of a component applied as a custom HTML tag, because everything (the view-model factory, the view-model constructor and all other functions) is called only after the template has been applied to the component, and the template is a required parameter.
Thus you can't build custom controls with templates inside. The only possible option is to specify the template id as a parameter of your custom control:

<template id='donate-template'>
    <input class='form-control text-right' type='text' 
           data-bind='value: donationAmount' />
</template>

<popover text="Donate" 
		 title="Enter amount of donation" 
		 data-bind="command: makeDonation" 
		 template="donate-template"></popover>

Or use a usual binding instead of a component to specify the template:

<div class="btn btn-xs btn-primary">
   <div data-bind="popover: {title:'Enter amount', command: makeDonation}"
                   class="hidden">
	 <input class='form-control text-right' type='text' 
                data-bind='value: donationAmount' />
   </div>
   Donate
</div>

More or less equivalent code, but imagine how useful this "inline templating" could be for controls like this: http://grid.tesseris.com/Home/Documentation#!/General/general

Let's hope for future versions…