December 19, 2013

HTML5 Canvas: State Stacking

Stacking states with Canvas

Introduction

This is the first post of a series about HTML5 Canvas. In this article I will explain the meaning of context.save() and context.restore(). Simply put, these methods are responsible for stacking the context's states. But what does that mean?

What are states in Canvas?

First, it is necessary to know what states are. The simplified answer is: everything that does not draw!
The Canvas API can be divided into drawing methods and auxiliary methods and properties. A good part (though not all) of these auxiliary members define the appearance of drawn areas and paths. For example, strokeStyle, fillStyle, globalAlpha, lineWidth, shadowOffsetX, shadowOffsetY, shadowBlur, shadowColor, font, textAlign and textBaseline are all properties that modify state. The transformation matrix, modified by the methods translate, rotate, scale and setTransform, is also a state, as is the clipping region defined by clip. Everything modified by these members is state, and state can be stacked.
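Conceptually, context.save() pushes a snapshot of all these state values onto a stack, and context.restore() pops the most recent snapshot back. The following toy sketch mimics that behaviour with a plain object and an array, independent of the real Canvas API (only two illustrative properties are tracked here):

```javascript
// A toy drawing context that mimics Canvas state stacking.
function ToyContext() {
    this.fillStyle = "#000000"; // current state values
    this.lineWidth = 1;
    var stack = [];             // the state stack

    // save(): push a snapshot of the current state
    this.save = function () {
        stack.push({ fillStyle: this.fillStyle, lineWidth: this.lineWidth });
    };

    // restore(): pop the last snapshot and reinstate it
    this.restore = function () {
        var state = stack.pop();
        if (state) {
            this.fillStyle = state.fillStyle;
            this.lineWidth = state.lineWidth;
        }
    };
}

var ctx = new ToyContext();
ctx.save();                // remember the defaults
ctx.fillStyle = "#ff0000"; // temporary change
ctx.restore();             // back to "#000000"
```

Because save/restore work as a stack, calls can be nested: each restore undoes exactly the changes made since the matching save.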

What can I do with stacking?

Obviously, it is easy to recover a previously set state by simply popping it from the stack; since defining a proper state can be quite cumbersome, this also keeps your code cleaner. Stacking can even improve runtime performance, as demonstrated here at JsPerf. Another important advantage is the "isolation" of state-dependent operations. In the next paragraph I'll explain this concept more precisely.

Isolation of state dependent operations

Using state stacking you can isolate some state operations and group others quite easily. Imagine a car whose wheels shall rotate while the car moves forward. You can isolate the rotation of the wheels during painting by stacking the transformation matrix and applying the rotation. Afterwards, you restore the transformation matrix and paint the next frame. The following snippet demonstrates this principle by transforming texts. Here is the visual result.
And here comes the code.
function main(){
    var context = document.getElementById('myCanvas').getContext('2d');
    
    var painter = new Painter(context);
    
    painter.setFillColor(255,0,0,1);
    painter.drawText("Text 1", 50);    
    painter.pushState();    
    painter.rotate(320, 100, 45);
    painter.setFillColor(0,0,255,1);
    painter.drawText("Text 2", 100); 
    painter.popState();
    painter.drawText("Text 3", 150);                
}
function Painter(ctx){
    var context = ctx;
    var DEG2RAD = Math.PI/180.0;
    var center = []; // canvas center coordinates [x, y]
        
    var init = function(ctx){
        context = ctx;
        center[0] = context.canvas.width/2;
        center[1] = context.canvas.height/2;
    };
            
    this.pushState = function(){
        context.save();
    };
    
    this.popState = function(){
        context.restore();
    };
    
    // Rotate around (posX, posY): move the pivot into the origin, rotate, move back.
    this.rotate = function(posX, posY, angle){
        context.translate(posX, posY);
        context.rotate(angle * DEG2RAD);
        context.translate(-posX, -posY);
    };
    
    this.setFillColor = function(r,g,b,a){
        context.fillStyle = "rgba(" + r + "," + g + "," + b + "," + a +")";
    };
    
    this.drawText = function(text, ypos){        
        context.save();
        context.font = "30px Arial";
        context.textAlign = "center";
        context.fillText(text, center[0], ypos);
        context.restore();              
    };
        
    init(ctx);
}

December 12, 2013

Assertions are not bad

Or: I assert that assertions keep the code cleaner

Let's start with a simple example:

public class MyController : ApiController{
  
  public MyController(Repository repository){
    this.repository = repository;
  }

  [HttpPost]
  public MyEntity Create(MyEntity myEntity){

    Debug.Assert(myEntity != null, "Entity must not be null");

    this.repository.Add(myEntity);
    this.repository.Commit();

    return myEntity;
  }

  /* ... */
}
public class MyEntityRepository : Repository<MyEntity>{

  private Dictionary<MyEntity, State> myEntities = new Dictionary<MyEntity, State>();

  public MyEntityRepository(string connectionString) : base(connectionString){

  }

  public void Add(MyEntity myEntity) {
    Debug.Assert(myEntity != null, "Entity must not be null");
    myEntities.Add(myEntity, State.Added);
  }

  public void Delete(MyEntity myEntity) {
    Debug.Assert(myEntity != null, "Entity must not be null");
    myEntities.Add(myEntity, State.Deleted);
  }

  public void Commit(){

    /*
    Code responsible for persisting.  
    */
  }
}

Most of the code is self-explanatory, but the application context for these snippets needs explaining. MyController is a web service triggered by HTTP requests (expressed here as a C# WebAPI-like implementation). MyEntityRepository is part of a persistence layer; it merely demonstrates the intention of persisting objects. Apparently, both classes use the same kind of assertion, but the assertion in MyController is terribly wrong.

Why is the assertion in MyController terribly wrong?

First of all, I am not complaining about the 'duplicated' assertions. I assume that the user of MyEntityRepository has no insight into its code and does not know what is going on inside, as the repository class is designed to be reusable. There are other reasons why I call this a really bad use of assert. Most obviously, it kills the application (a web service!) when the argument is null. True, this assertion won't trigger in production code, but that is not the point: the intention behind its use is wrong. Most probably someone passes invalid arguments to the web service, which will be deserialized to null. The assertion is being used as exception handling. The only good thing is that it won't trigger in the release version. So, this assertion is crap.

Why are the assertions in MyEntityRepository admissible?

One might be tempted to allow null for Add and Delete and then skip all null entries in Commit. The code seems more robust that way, but this approach has drawbacks. As an old C++ coder I would say: a waste of memory. Why should I add something to a list that I will never use, just because a careless coder used my methods the wrong way? But memory doesn't matter nowadays (*sigh*). Talking about clean code, we can see that the complexity of Commit grows, and we delegate responsibility to another part of our code: Commit has to deal with erroneous additions and/or deletions. Another point is that the assertion statements in Add and Delete explain the code in a comprehensible way. An assertion is like a functional comment that guarantees correct logic, and this guarantee is intended for the user of the code, not the user of the application.

Assertions are for programmers, not for users

Earlier I said the intention of use is wrong, but it is more than that: it is about the addressees. Assertions do not address users, they address programmers. Assertions help the programmer avoid logical errors. They apply only to the development cycle, when the programmer deals with debug builds. They are guidance for the programmers who use the code.
When somebody uses the repository class, he will be informed that null is not allowed. He knows that he made a severe programming mistake and will change his code according to our assertion. This is the true intent of an assertion.

Assertions are not Exception Handling

Exception handling gives control over undesired but possible occurrences at runtime. It is about behavioural failures at runtime, while assertions are about programming failures. It is important to emphasize that exception handling allows you to define an appropriate behaviour in case of runtime problems. Assertions do not give any control over behaviour at all; they simply exit the application in a rude way. Exception handling remains in debug and release builds; assertions apply to debug builds only (at least in C, C++ and C# - Java treats assertions slightly differently). Usually, exceptions bubble from the code abysses up to the surface. In any case they should be handled to keep the application running, or at least to exit it in a controlled way. Assertions do not bubble and cannot be handled; therefore, they are not testable (see the paragraph below). As one can see, exception handling and assertions are different but complementary strategies for creating better and more robust programs. It is important to know when to use assertions. The following image depicts it in a general form.

Assertions are not testable

While exceptions are specific events that can and must be handled, assertions can be neither handled nor tested. For many coders this is a no-go, so they do not use assertions. But those who complain about the non-testability do not clearly understand assertions. Tests are written against functionality and behaviour. If a test ends in an assertion, it is simply a programming error: either the test is written wrongly, or the assertion in the tested code is wrong. Assertions are, per definition, not intended to be testable.

When to use Assertions?

As the image shows, assertions are for developers, and so is exception handling. How do you know when to use which?
First of all, use them wisely. Extensive and unnecessary use of either assertions or exception handling clutters the code. Usually, there are far fewer assertions in a codebase than exceptions. The following list should help to determine when to use assertions.
  • A situation that shall never ever occur should be protected by an assertion
  • Use assertions only in code parts, which are intended for programmers
  • Think of an assertion as a functional comment, and/or guidance for other developers
  • Assertions trigger only in debug builds.

November 08, 2013

Generate Oracle-Database from EDMX

How to create an Oracle database using the ADO.NET Entity Data Model Designer

The context

Our current project is a central register of vehicles, which need to be transferred into a legacy system once created. An integration service replicates the entries into the client's huge database landscape (a kind of multiplexing). For development purposes it would be ideal to have a cloned system in-house, but because of the landscape's size we cannot simply dump the system and clone their environment into our development environment. Furthermore, we only need very few tables for the vehicle register.
The idea is to:

  1. generate entities from the existing database the client provides (Oracle DB).
create a database in our environment, focused only on the necessary entities.
The starting point for this post is an already created conceptual model using the ADO.NET Entity Data Model generator.

Prerequisites

For this tutorial you need at least MS Visual Studio 2010 and the Oracle Developer Tools for Visual Studio installed on your machine. I suppose that you already have a conceptual model at hand. So, you find yourself in the following situation:

Setting up Oracle PL/SQL generator

To generate PL/SQL you need to open the properties of your data model (right-click in the designer). At a minimum, you need to set up the Database Generation Workflow and the DDL Generation Template as depicted below. You may want to set up other things as well, most probably the Database Schema Name.

Generate Oracle PL/SQL

So, you're nearly done. Just open the context menu in the designer and select Generate Database from Model..., which immediately opens the generated PL/SQL script. When I tried to finish the process my Visual Studio hung, so I simply copied the text directly into SQL Developer and executed the script without any problems.
That's all. Nothing more, nothing less.

October 25, 2013

Converting a VS Class Library Project to Web Application Project

Without recreating the project!

Today we discovered that a recently created VS2010 project was not set up as a Web Application. When we tried to run the debugger, VS complained that libraries cannot be executed as an application. I found out that the "Web" tab was missing under the project's properties. I opened another project to compare the settings within VS, but could not figure out any obvious differences.

First, I asked Google, which pointed me to MSDN, where the Redmonders suggest, in a conversion walk-through, creating a new project and copying the content from the old project into the new one. Yeah, but we all know that this could mess up our references, even if we use NuGet. To be honest, I started to follow that hint and found myself in exactly such a painful situation. After a few minutes I abandoned the idea. That way is only for masochistic lifeforms.

I diff'ed both project files (using TortoiseSVN Diff) and then changed the compromised project file manually. I played around a bit and discovered that I only needed to add the ProjectTypeGuids tag to get things working.
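For reference, the tag goes into the first PropertyGroup of the .csproj file. The GUIDs below are the well-known project-type GUIDs for a C# Web Application; double-check them against a working project of your own, since they differ per project type and language:

```xml
<PropertyGroup>
  <!-- First GUID: Web Application project type; second GUID: C# project type -->
  <ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>
</PropertyGroup>
```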

So instead of cumbersome project recreation it needs less than a minute to convert the pure library project into a web application project.

September 28, 2013

Released first stable version of js-parallax

Today I committed a stable version of the previously announced jQuery plugin for retro-style parallax scrolling to Google Code. The plugin provides a very easy-to-use API to include a multi-layer image scroller on your sites.

Usage Example

<html>
    <head>
        <title>PARALLAX SCROLLER</title>
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js" type="application/javascript"> </script>
        <script src="http://parallax-js.googlecode.com/svn/tags/3/js/jquery.parallax.min.js" type="application/javascript"> </script>
        
        <script type="application/javascript">
            $(function() {
                $(".scrollerNew").parallax();
            });
        </script>
        
    </head>
    <body>
        <div class="content">Content</div>
        <ul class="scrollerNew">
            <li data-img='img/background.jpg' data-width='549px' data-height='168px' data-xi='6' data-repeat="repeat"></li>
            <li data-img='img/layer1.png' data-width='549px' data-height='168px' data-xi='10' data-repeat="repeat"></li>
            <li data-img='img/layer2.png' data-width='549px' data-height='168px' data-yi='14' data-repeat="repeat"></li>
        </ul>        
    </body>
</html>

The plugin and a more detailed description are available at https://code.google.com/p/parallax-js/. More examples will be added soon. Needless to say, the gadget in this blog's header uses js-parallax.

September 19, 2013

Close the tags, but close them fucking correctly.

Or: Do not mix HTML with X(HT)ML

Yesterday, our team stumbled over a broken layout in IE8, while everything worked well in FF, IE9+, Chrome, and Safari.
Initially, we thought we could fix it easily by adapting the CSS, e.g. replacing display:inline-block with floating elements. This did not work. So we checked the HTML structure for missing closing tags... everything looked correct. We started to despair as our ideas faded and our knowledge came to an end. Luckily, I saw a small warning show up in the developer console in Browser Mode IE10 (not IE8), which seemed quite strange to me, as I did not find any problem within the referenced code.

The complete message (in Portuguese) is the following: HTML1523: Marca de fim com sobreposição. As marcas devem ser estruturadas como "<b><i></i></b>" em vez de "<b><i></b></i>". (Roughly: "Overlapping end tag. Tags must be structured like "<b><i></i></b>" instead of "<b><i></b></i>".) It complains about allegedly overlapping tags within the following HTML:

<div class="login">
    <asp:Label runat="server" ID="lnkLogin">Login<span class="seta-baixo"/></asp:Label>
    <asp:LinkButton runat="server" ID="lnkLogout">Logout</asp:LinkButton>
</div>

Do you see any problem?

Syntactically, everything seems fine, or does it? Well, there is a small detail. I remembered that I once had problems with inline-closed script tags: when a script tag is closed in the XHTML/XML manner, the script is not loaded correctly and cannot be used. That is because the syntaxes of two similar but not identical languages are being mixed up. Just to remember:

HTML is not XHTML(nor XML)!

You will run into problems when mixing XML syntax into HTML. Some browsers can deal with it, others cannot. The solution was really simple: we closed the span tag the correct HTML way. You could say "use the correct doctype definition to avoid the problem", and you would be right. But it is more bulletproof to use an explicit closing tag, as that form is also valid XML!
<div class="login">
    <asp:Label runat="server" ID="lnkLogin">Login<span class="seta-baixo"></span></asp:Label>
    <asp:LinkButton runat="server" ID="lnkLogout">Logout</asp:LinkButton>
</div>

With the correction the layout works on all browsers as expected, even on IE8.
So, before you go nuts and spend valuable time, remember to close tags correctly in HTML style.

September 14, 2013

Announcement: jQuery Plugin for Retro-Style Parallax-Scrolling

Currently, I am playing around a little with a new code editor called Brackets, which seems pretty cool. Maybe I will write something about it in the near future. Here I would like to talk about the mini-project I chose to get familiar with Brackets. I am developing a tiny jQuery extension/plugin/whatever for simple parallax scrolling. "Wow!", you say now, "Isn't there a bunch of them already out there? Why are you reinventing the wheel?" Yeah, you're right. But I am thinking about the oldschool parallax scrolling used in games of the '80s and early '90s, like Turrican, Shadow of the Beast, Mario Bros., etc. That hippie pippie new-school parallax is trendy... and others have done it already. I developed this toy mainly for training purposes, and for people who are interested in it. Here is a sketch of how to use it.
<html>
    <head>
        <meta charset="utf-8">
        <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
        <title>PARALLAX SCROLLER</title>
        <meta name="description" content="Testbed for parallax scroller development.">        
        <link rel="stylesheet" href="css/parallax-demo.css">    
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js" type="application/javascript"> </script>
        <script src="../trunk/js/jquery.parallax.min.js" type="text/javascript"> </script>
        <script type="application/javascript">

            function sinEaseFunc(elapsedSeconds, step){
                return Math.sin(elapsedSeconds/10)*step;
            }

            function cosEaseFunc(elapsedSeconds, step){
                return Math.cos(elapsedSeconds/10)*step;
            }

            $(function() {
                $(".scrollerNew").parallax([
                    { xf : sinEaseFunc , yf : cosEaseFunc }
                ]);
            });
        </script>
        
    </head>
    <body>
        <div class="content">Content</div>
        <ul class="scrollerNew" data-fps='25'>
            <li data-img='img/tex5.jpg' data-width='549px' data-height='168px' data-xi='150' data-yi='150' data-repeat="repeat"></li>
            <li data-img='img/tex3.png' data-width='549px' data-height='168px' data-xi='12' data-yi='4' data-repeat="repeat"></li>
            <li data-img='img/texttex.png' data-width='549px' data-height='168px' data-yi='-10' data-repeat="repeat-y"></li>
            <li data-img='img/aufgerissen.png' data-width='549px' data-height='168px' data-repeat="no-repeat"></li>
        </ul>        
    </body>
</html>
As you may imagine, this scroller has some interesting features. It is/has:
  • Easy to use
  • Time based animation, with configurable "Frames per second"
  • Highly customizable
  • Even non-linear translation possible
Currently, it is in an unstable development phase, but it is already worth announcing its existence. If you are interested, feel free to try it; it is available at Google Code. I plan to pimp this blog with that scroller.

September 04, 2013

requestAnimationFrame is what you need for browser-based animations

Are you still using setInterval for animating things on your website?

Forget about it! This is what I discovered yesterday. I started to build a simple parallax scroller and used setInterval initially. I was not convinced by the result, because I encountered some flicker and minor slowdowns. So I searched for optimization hints and discovered a relatively new technique called requestAnimationFrame. Here is what I did initially:
function ParallaxScroller() {
    var that = this;
    var layers = [];

    this.addLayer = function(layer){
        layers.push(layer);
    };

    this.start = function(timeout){
        setInterval(that.render, timeout);
    };

    this.render = function(){
        for(var i=0; i<layers.length; ++i){
            layers[i].update();
        }
    };
}

var scroller = new ParallaxScroller();
scroller.addLayer( /* layer */ );
scroller.start(50);
And here is the same code using requestAnimationFrame
function ParallaxScroller() {
    var that = this;
    var layers = [];

    this.addLayer = function(layer){
        layers.push(layer);
    };

    this.render = function(){
        for(var i=0; i<layers.length; ++i){
            layers[i].update();
        }
        requestAnimationFrame(that.render); // # HERE!
    };
}

var scroller = new ParallaxScroller();
scroller.addLayer( /* layer */ );
scroller.render();
The thoughtful reader may have noticed that no interval is passed to requestAnimationFrame. To understand what is happening here, it is vital to know the essential difference between requestAnimationFrame and setInterval:

Using setInterval for animations forces the browser to refresh the screen whenever the related callback is triggered. That means screen updates occur at arbitrary times and have a good chance of being unnecessary; as a result, the animations can be choppy. Using rAF instead overcomes those "update interruptions". As the name says, it requests an update frame, so the browser becomes responsible for deciding whether a callback is triggered or not. Yes, it might decide not to trigger it, for example if the updating page is not visible. The browser can synchronize the callback with its own refresh rate, which is usually 60Hz, to avoid unnecessary (and CPU-costly) repaints. Consequently, you get smoother results due to synchronized refreshes, and you can save CPU resources (and battery life), especially while the page is not visible.

That's why I simply do not care about frame rates when animating my stuff on the screen with rAF; the animations are updated in the correct time slot, roughly every 16 ms. There are other articles out in the wild that explain more precisely how it works, like "Better performance with requestAnimationFrame", CreativeJS' requestAnimationFrame, and "Better JavaScript animations with requestAnimationFrame". Also interesting is Microsoft's rAF Test Drive, which compares both methods visually.
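Since rAF decides when the callback runs, animations should advance by elapsed time rather than by a fixed per-call amount. Here is a minimal sketch (the function names are my own) that converts the timestamp rAF passes to its callback into a frame-rate-independent step:

```javascript
// Returns a stepper that converts rAF timestamps (in milliseconds) into
// distances, so movement speed stays constant regardless of frame rate.
function makeStepper(pixelsPerSecond) {
    var last = null;
    return function(timestampMs) {
        if (last === null) {   // first frame: nothing has elapsed yet
            last = timestampMs;
            return 0;
        }
        var dx = ((timestampMs - last) / 1000) * pixelsPerSecond;
        last = timestampMs;
        return dx;
    };
}

// In a browser, the render loop would use it like this:
// var step = makeStepper(100);   // 100 px/s
// function render(ts) {
//     x += step(ts);
//     requestAnimationFrame(render);
// }
```

At a steady 60Hz the stepper yields about 1.6 pixels per frame for 100 px/s, but a dropped frame simply produces a proportionally larger step instead of a visible slowdown.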

August 30, 2013

Remote Access to qBitTorrent WebUI on ReadyNAS102

I own a ReadyNAS 102 on which I installed the BitTorrent client qBittorrent. This client comes with a web interface and offers some nice features, like email notification and/or file copying on finished downloads. But the best part is that it runs on the NAS 24/7 at relatively low energy consumption (about 35W). I thought it would be very cool to access the WebUI from work or from my phone, so that I could monitor and add new torrents remotely.

My network setup at home is quite simple: I have the common Telefônica modem, connected to my TP-Link WLAN router (WL1043ND). The router leads to an 8-port switch, where my NAS is plugged in.

Setting up Dynamic DNS

First of all, I set up automatic updating of the (dynamic) IP provided by my ISP in DNS, using NoIP. My router already provides automatic updates for services like DynDNS, NoIP and Comexe. As far as I can see, DynDNS is not free (maybe it once was) and Comexe is somewhat Chinese. So I just created an account at NoIP and chose a domain name. Afterwards, I registered the account in the administration panel of my router and enabled the DDNS feature. After the activation I checked at NoIP whether the IP update was successful. Beleza - so far, no problems.

Forwarding Port of Bittorrent Client

In a second step, I forwarded the port of the torrent client in my LAN to make it accessible from the outside. I mapped the external port to the same value as the internal port, i.e. 9000. Now it is already possible to access the client from everywhere using the URL http://ohaeger.no-ip.biz:9000.

Securing the connection

My client is protected with a password, but plain HTTP is truly no protection, as the credentials are transferred in clear text; I need to secure the connection. The qBittorrent client already offers an option for encrypted connections via HTTPS, but I need to provide a key and a certificate. I do not own an SSL certificate, so I need to create one - for free. To create such a certificate I use OpenSSL, which usually ships with Linux distributions. If you are on Windows you can download OpenSSL here. In the following I describe how to create such a self-signed certificate manually. This web service offers a much easier way to create one.

Create a private key

The first step (after the eventual installation of OpenSSL) is creating a private key. Here I generate a 2048-bit RSA key and store it in the file 'torrent.key'.
>openssl genrsa -out torrent.key 2048
Generating RSA private key, 2048 bit long modulus
......................+++
................................................................................+++
e is 65537 (0x10001)

Create a Certificate Signing Request (CSR)

Now I am going to create a CSR, which I could send to a Certificate Authority (CA), but I won't: for my purposes a self-signed certificate is sufficient. I pass the recently generated private key to the request and obtain 'torrent.csr' as a result.
> openssl req -new -key torrent.key -out torrent.csr
Enter pass phrase for torrent.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:BR
State or Province Name (full name) [Some-State]:SP
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Devbutze
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:Oliver Hager
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

Self-signing the certificate

As I do not have the certificate verified by a CA, I sign it myself. The resulting file 'torrent.crt' will be used in the qBittorrent client in the next step.
>openssl x509 -req -days 365 -in torrent.csr -signkey torrent.key -out torrent.crt
Signature ok
subject=/C=BR/ST=SP/O=Devbutze/CN=Oliver Hager
Getting Private key
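As an aside, the three steps above can be collapsed into a single command: openssl req with -x509 generates the key and the self-signed certificate in one go, skipping the intermediate CSR file (-nodes leaves the key without a passphrase; adjust the -subj fields to your own data):

```shell
# Key + self-signed certificate in one step: no CSR file, no passphrase
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout torrent.key -out torrent.crt \
    -subj "/C=BR/ST=SP/O=Devbutze/CN=Oliver Hager"
```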

Setting certificate in qBittorrent

Finally, I enter the private key and the certificate in qBittorrent under Options -> WebUI.

Accessing qBittorrent securely

And last but not least I can access the client using HTTPS.

August 29, 2013

Using Prettify with Blogger.com

I tried to configure Blogger to use Prettify.js and struggled a bit, as it did not work as I expected.
I looked at one of my favorite knowledge sources and saw that using syntax highlighting in Blogger.com should be quite simple.

So I started to follow the advice given at Stackoverflow.com and added/modified the following lines in the HTML of Devbutze.

<head>
    <link href="http://google-code-prettify.googlecode.com/svn/trunk/src/prettify.css" rel="stylesheet" type="text/css"/>
    <script src="http://google-code-prettify.googlecode.com/svn/trunk/src/prettify.js" type="text/javascript"></script>
...
</head>
<body onload="prettyPrint()">
...
</body>

It is worth noticing - as also mentioned at Stackoverflow - that after correct configuration the to-be-highlighted code won't be colored in the preview. But even when I published my very, very first test post, there was still that poor black-and-white code. WTF?

After some trials I discovered that the necessary execution of prettyPrint() failed for some reason, at least in Chrome. With FF I had no problems and it worked as expected.
The solution, without talking too much, is very simple. Initially, I used the 'Dynamic' template for the blog; when I changed to the 'Simple' template, it worked like a charm. Well, that is not really a solution, but as I don't like the Dynamic template that much anyway, it serves as a solution at least for me.