Failing inserts with node-mongodb-native 1.3.9

Today, after upgrading the MongoDB driver for Node.js to the latest version, 1.3.9, I ran into strange results with our tests: the ones that use .insert() operations on collections suddenly started to fail with timeouts.

Finally, after some good old caveman debugging, I discovered that a callback passed to .insert() was not called anymore, even with an increased timeout. I wanted to check via the mongo shell whether the freeze happened before or after the actual DB operation, but the shell just showed an error. Mongo’s logfile explained why: it had exited with a segmentation fault. Some web research suggested that this could have to do with BSON and the native BSON parser, so I ended up disabling the native parser, which fixes the problem for the moment.

To connect to Mongo without the native parser one can use the connect() method of MongoClient:

var MongoClient = require('mongodb').MongoClient;
MongoClient.connect("mongodb://localhost:27017/database?",
  { db: { native_parser: false } }, function(err, db) {
    if (!err) {
      // insert like hell
    }
  }
);
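
Inside that callback, an insert then looks roughly like the following sketch (collection and field names are made up; in the 1.x driver the callback receives the inserted documents):

db.collection('documents').insert({ title: 'test' }, function(err, docs) {
  if (err) {
    // this is the callback that never fired with the native parser enabled
    console.log('insert failed: ' + err);
  } else {
    console.log('inserted ' + docs.length + ' document(s)');
  }
  db.close();
});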

Up to now I’m not sure why the native parser is not working on my machine. Maybe there is something wrong with the build environment npm uses to compile the BSON module. However, I don’t know yet how to fix that. If someone has encountered the same problem and managed to fix it, please let me know.

LaTeX: Footnotes in Captions – on the right page

And there it was again: A moment of „Why can’t LaTeX simply do what I want it to?“.

I have figures which I cite. As they are licensed under CC BY 2.5, I thought of placing the reference as a footnote to the respective figure – which did not work as expected:

\begin{figure}
  \begin{center}
    \includegraphics[scale=0.75]{images/the_cited.png}
    \caption[The LOF caption]{Lorem ipsum. \tiny{This image is taken from somewhere and licensed under Creative Commons Attribution 2.5 license\footnote{Source: \url{http://www.example.com/the_image.png}}}}
    \label{fig:cited_img}
  \end{center}
\end{figure}

What finally worked was using the afterpage package, as suggested here:

\afterpage{
  \begin{figure}
    \begin{center}
      \includegraphics[scale=0.75]{images/the_cited.png}
      \caption[The LOF caption]{Lorem ipsum. \tiny{This image is taken from somewhere and licensed under Creative Commons Attribution 2.5 license\footnotemark}}
      \label{fig:cited_img}
    \end{center}
  \end{figure}
  \footnotetext{Source: \url{http://www.example.com/the_image.png}}
}
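
For this to compile, the afterpage package of course has to be loaded in the preamble:

\usepackage{afterpage}

The trick, as far as I understand it, is that \footnotemark only places the marker inside the caption, \footnotetext supplies the actual footnote text outside the float, and afterpage defers the whole block until after the current page is shipped out, so mark and text end up on the same page.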

A script’s tale – exploring V8

When I started to play around with V8 and tried to understand the basics of how it works internally, I felt pretty lost. The project’s homepage is rather high-level: it tells you how to make V8 run some „Hello world“ and how one could use it in one’s own application. But there is not much about the guts, so I started digging into them.

Overview

V8 is a JavaScript engine. As such, what it does is take and evaluate JavaScript (or rather, its implementation of ECMAScript). Considering the limited set of built-ins alongside the primitives, this alone wouldn’t be that exciting, especially since interaction with the outside world would be impossible. But V8 provides an embedding API which makes it interesting and which, for example, led to the noteworthy Node.js framework.

JavaScript environment

To provide a runtime environment, V8 uses contexts. They hold the current state, including the global object that owns all global variables. By binding custom object or function templates to the global object, one can extend the set of available objects and functions with arbitrary C++ code.
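
To illustrate, here is a stripped-down embedding sketch, roughly following the V8 API of that era (the print function and all names are mine, and details differ between V8 versions):

#include <cstdio>
#include <v8.h>
using namespace v8;

// A native function that becomes callable from JavaScript.
static Handle<Value> Print(const Arguments& args) {
  String::Utf8Value str(args[0]);
  printf("%s\n", *str);
  return Undefined();
}

int main() {
  HandleScope handle_scope;
  // Template for the global object with a custom 'print' function bound to it.
  Handle<ObjectTemplate> global = ObjectTemplate::New();
  global->Set(String::New("print"), FunctionTemplate::New(Print));
  // Every script compiled and run in this context can now call print().
  Persistent<Context> context = Context::New(NULL, global);
  Context::Scope context_scope(context);
  Handle<Script> script = Script::Compile(String::New("print('hello');"));
  script->Run();
  context.Dispose();
  return 0;
}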

This template mechanism is exactly how Chromium binds its DOM implementation to V8. Those curious about these templates should have a look at the /src/out/Debug/gen/webcore/bindings/V8 directory after building Chromium.

Script execution

Execution of a script is triggered by calling Compile and Run on the Script class. Providing an overview of and some navigation through what happens between these two calls is what this post is about.

Under the hood

A note/hint: I used the Eclipse CDT to explore V8. I really learned to appreciate features like macro expansion and the in-place overlaying of method declarations, which are great for this purpose.

When V8 receives a script, it passes through four stages: Parsing, AST creation, Compilation and Execution. V8 compiles to native machine code for the target platform, which is one reason for its speed.

Script::Compile is implemented in api.cc and, by calling Script::New, triggers the compilation process:

// beginning in line 1560
Local<Script> Script::Compile(v8::Handle<String> source,
                              v8::ScriptOrigin* origin,
                              v8::ScriptData* pre_data,
                              v8::Handle<String> script_data) {
  i::Isolate* isolate = i::Isolate::Current();
  ON_BAILOUT(isolate, "v8::Script::Compile()", return Local<Script>());
  LOG_API(isolate, "Script::Compile");
  ENTER_V8(isolate);
  Local<Script> generic = New(source, origin, pre_data, script_data);
  if (generic.IsEmpty())
    return generic;
  i::Handle<i::Object> obj = Utils::OpenHandle(*generic);
  i::Handle<i::SharedFunctionInfo> function =
      i::Handle<i::SharedFunctionInfo>(i::SharedFunctionInfo::cast(*obj));
  i::Handle<i::JSFunction> result =
      isolate->factory()->NewFunctionFromSharedFunctionInfo(
          function,
          isolate->global_context());
  return Local<Script>(ToApi<Script>(result));
}

// beginning in line 1499
Local<Script> Script::New(v8::Handle<String> source,
                          v8::ScriptOrigin* origin,
                          v8::ScriptData* pre_data,
                          v8::Handle<String> script_data) {
  i::Isolate* isolate = i::Isolate::Current();
  // ...
  i::SharedFunctionInfo* raw_result = NULL;
  { i::HandleScope scope(isolate);
    // ...
    i::Handle<i::SharedFunctionInfo> result =
      i::Compiler::Compile(str,
                           name_obj,
                           line_offset,
                           column_offset,
                           NULL,
                           pre_data_impl,
                           Utils::OpenHandle(*script_data),
                           i::NOT_NATIVES_CODE);
    // ...
  }
  i::Handle<i::SharedFunctionInfo> result(raw_result, isolate);
  return Local<Script>(ToApi<Script>(result));
}

The compiler’s Compile implementation captures some statistics, checks whether the compilation cache already holds a compiled version of the script and, if not, hands over to MakeFunctionInfo:

// beginning in line 486
Handle<SharedFunctionInfo> Compiler::Compile(Handle<String> source,
                                             Handle<Object> script_name,
                                             int line_offset,
                                             int column_offset,
                                             v8::Extension* extension,
                                             ScriptDataImpl* pre_data,
                                             Handle<Object> script_data,
                                             NativesFlag natives) {
  // ...
  if (result.is_null()) {
    // ...
    result = MakeFunctionInfo(&info);
    if (extension == NULL && !result.is_null() && !result->dont_cache()) {
      compilation_cache->PutScript(source, result);
    }
  }
  // ...
  return result;
}

// beginning in line 372
static Handle<SharedFunctionInfo> MakeFunctionInfo(CompilationInfo* info) {

  // ...

  // Parsing
  if (!ParserApi::Parse(info, flags)) {
    return Handle<SharedFunctionInfo>::null();
  }

  // ...

  // Compilation
  if (!MakeCode(info)) {
    if (!isolate->has_pending_exception()) isolate->StackOverflow();
    return Handle<SharedFunctionInfo>::null();
  }

  // ...

  return result;
}

Parsing

The call to the parsing code ends up in parser.cc, where a Parser object is prepared for its mission, which will eventually start via ParseProgram:

// beginning in line 6034
bool ParserApi::Parse(CompilationInfo* info, int parsing_flags) {
  // ...
  if (info->is_lazy()) {
    ASSERT(!info->is_eval());
    Parser parser(info, parsing_flags, NULL, NULL);
    if (info->shared_info()->is_function()) {
      result = parser.ParseLazy();
    } else {
      result = parser.ParseProgram();
    }
  } else {
    ScriptDataImpl* pre_data = info->pre_parse_data();
    Parser parser(info, parsing_flags, info->extension(), pre_data);
    if (pre_data != NULL && pre_data->has_error()) {
      // ...
    } else {
      result = parser.ParseProgram();
    }
  }
  info->SetFunction(result);
  return (result != NULL);
}

The Parser’s implementation is also part of parser.cc. The methods ParseProgram and ParseLazy can be found at the beginning of the file (currently around line 530). Both walk over the given source code and build an Abstract Syntax Tree from it, which is stored in a particular AST node, a FunctionLiteral. That is, a script gets an enclosing function which returns the result of its last statement. Script::Run will execute this function.

The Abstract Syntax Tree

As parsing the code and building the AST are interleaved, there is not much to say about the AST’s creation here. Instead I’ll explain its building blocks. An AST consists of AstNodes which belong to one of five node types.

To get a first understanding of the basic structure of JavaScript programs, it is sufficient to focus on Statements and Expressions. Probably everyone who has ever dealt with the theory of programming language syntax and semantics knows these concepts. Statements drive the control flow, while Expressions mainly cause data flow.

A FunctionLiteral

As said before, scripts are transformed into FunctionLiterals, which form the roots of the corresponding ASTs. A FunctionLiteral has a body, which is a list of Statements that correspond to the actual statements of the source code, but transformed into a tree structure.

For an assignment in the source, this means it becomes an ExpressionStatement wrapping an Expression of type Assignment. An Assignment itself has two further Expressions as children, the left-hand-side target and the right-hand-side value. The nesting may be arbitrarily deep, until an Expression is reached that has no children of its own.

One can inspect generated ASTs using the --print-ast flag of the debug shell d8. After building V8, it can be invoked from the main directory via out/x64.debug/d8 --print-ast foo.js. The following example shows the AST for a factorial function:

function factorial(n){
    var result;
    if (n == 1 || n == 0)
        result = 1;
    else
        result = n*factorial(n - 1);
    return result;
}

FUNC
. NAME "factorial"
. INFERRED NAME ""
. PARAMS
. . VAR (mode = VAR) "n"
. DECLS
. . VAR (mode = VAR) "result"
// actual beginning of the body
. BLOCK INIT
. IF
. . OR
. . . EQ
. . . . VAR PROXY parameter[0] (mode = VAR) "n"
. . . . LITERAL 1
. . . EQ
. . . . VAR PROXY parameter[0] (mode = VAR) "n"
. . . . LITERAL 0
. THEN
. . ASSIGN
. . . VAR PROXY local[0] (mode = VAR) "result"
. . . LITERAL 1
. ELSE
. . ASSIGN
. . . VAR PROXY local[0] (mode = VAR) "result"
. . . MUL
. . . . VAR PROXY parameter[0] (mode = VAR) "n"
. . . . CALL
. . . . . VAR PROXY (mode = DYNAMIC_GLOBAL) "factorial"
. . . . . SUB
. . . . . . VAR PROXY parameter[0] (mode = VAR) "n"
. . . . . . LITERAL 1
. RETURN
. . VAR PROXY local[0] (mode = VAR) "result"

Compilation

When the AST is ready, the next stage begins with MakeCode back in compiler.cc:

static bool MakeCode(CompilationInfo* info) {
  // Precondition: code has been parsed.  Postcondition: the code field in
  // the compilation info is set if compilation succeeded.
  ASSERT(info->function() != NULL);
  return Rewriter::Rewrite(info) && Scope::Analyze(info) && GenerateCode(info);
}

Rewrite

From what I observed, Rewriter::Rewrite in rewriter.cc simply walks through the AST and inserts Return statements if none are explicitly defined (in this case the result of the last Statement is returned).
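
Roughly, this means that a made-up script like

var x = 20;
x * 2 + 2;   // no explicit return anywhere

conceptually gets a return of the last statement’s value inserted, so running it via Script::Run yields 42.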

Analyze

In JavaScript, Scopes are used to bind and look up variables. Each function gets its own Scope, which is chained to the Scope the function was defined in. To look up a variable, V8 walks along the scope chain until it finds the variable. This way, variable shadowing is naturally covered by the data structures used.
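
A small, made-up example of the shadowing that falls out of this for free:

var n = 1;          // bound in the outer (global) scope
function f() {
  var n = 2;        // a new binding in f's own scope shadows the outer n
  return n;         // the lookup stops at the innermost scope
}
f();                // returns 2; the outer n is untouched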

What I believe Scope::Analyze does is record as much information about the encountered variables as is statically possible, to support the compilation process, i.e. to allow speed-ups in the generated code. There is possibly more behind it, as V8 has sophisticated optimization techniques; see these wingolog articles for more: 2 compilers, Crankshaft and Lithium.

GenerateCode

When a function is compiled for the first time, this is the business of the FullCodegen compiler (at runtime, V8 may decide to optimize certain functions, and for that I would definitely refer you to wingolog). GenerateCode calls FullCodeGenerator::MakeCode, and from there on things get platform-specific, because the called Generate method is included from the platform-specific directories, e.g. here for x64, at compile time.

// beginning in line 289
bool FullCodeGenerator::MakeCode(CompilationInfo* info) {
  // ...
  FullCodeGenerator cgen(&masm, info);
  cgen.Generate();
  // ...
  return !code.is_null();
}

To find out how exactly V8 assembles the machine code, have fun and keep digging. For me, this was the end of the journey, as my interest is focused on the platform-independent parts of V8.

Build Chromium in Eclipse with Ninja and Clang

Because the Chromium project’s wiki suggests using Ninja to speed up the build process, I decided to give it a try. But as I’m working with Eclipse, I would be annoyed by having to switch between Eclipse and a console every time I want to recompile, especially as this works fine with make. Combining hints from different pages finally led to an Eclipse build configuration which works just as conveniently.

For this solution I assume that you already have a working make setup for Eclipse (build instructions, Eclipse setup).

Set up Ninja and Clang

Clang is not part of the Chromium checkout by default, thus you must, from the 'src' directory of your checkout, execute:
tools/clang/scripts/update.sh

Afterwards, gyp will recreate the build files for Ninja and Clang with the following command:
GYP_GENERATORS=ninja GYP_DEFINES=clang=1 ./build/gyp_chromium

Next, before starting the compilation with Ninja and Clang, this page suggests removing the output directory to prevent confusion between Ninja and make:
rm -rf out/Debug

To compile with the new setup, invoke:
ninja -C out/Debug chrome

Make Eclipse use Ninja

If the steps above succeeded and Ninja builds from the console, it is time to adapt the build settings in Eclipse (I’m using 3.7).

Open the project properties and navigate to the ‚C/C++ Build‘ option. In the ‚Builder Settings‘ tab, uncheck ‚Use default build command‘ and set ‚ninja -C out/Debug‘ as the build command. The build directory below should be set to ‚${workspace_loc:/chromium/src}‘, i.e. point to the src directory of the checkout, if this is not the case already.

Switch to the ‚Behaviour‘ tab and uncheck ‚Use parallel build‘ (Ninja uses all cores by default) or set a number of jobs you like. If it is not checked yet, check ‚Build (Incremental build)‘ and in either case set it to ‚chrome‘.

With these settings, Eclipse now uses Ninja and Clang to build the project. If you encounter problems with the PATH variable, the Chromium wiki suggests setting it manually in Eclipse.

Remote desktop with dual monitors, Ubuntu (and ATI)

I have a Linux machine with only two cables connected: a power supply and a network cable. As I’m lacking a KVM switch that supports two DVI monitors, I need remote access. And the story would already be over here, if there weren’t two monitors in the introduction.

Configuring desktop forwarding from Linux to either Windows or another Linux can’t be too hard – at least that’s what I thought, as there are x2go, NoMachine NX, Xming and some other tools around. If nothing works, there is still the possibility of using an X11 terminal on another tty as a last resort (story still not over, as there is ATI in the title).

Tools and dual monitors

To make it short: I couldn’t get any of the tools I tried – x2go (a sad thing, I liked that one), NX, Xming, remmina, cygwin – to display a (usable) dual-monitor desktop. However, they are all well suited for a single monitor or single applications (I would recommend Xming as an ’ssh -X‘ for Windows).

The key aspects of my Linux desktop: Ubuntu 11.10, Gnome 3 (classic) with Compiz on an ATI HD 4850 graphics card.

So, remote X on tty8 then (XDMCP)

Although I first didn’t like the idea of having the remote desktop on another tty (in the end, pressing Ctrl+Alt+<number> is not that different from Alt+Tab), I started looking around for a tutorial that explains how to configure an X server as a terminal server.

I’ve taken the basic steps from here; for Kubuntu 11.10 with KDE 4, which is running on the remote machine, they are slightly different.

Adapt kdm settings

There are some changes that have to be made to the default kdm and X config. First, remote XDMCP requests must be permitted:

[Xdmcp]
# is explicitly set to false by default
Enable=true
...
...

[X-*-Core]
# if you want to shutdown the machine without entering the root pass, set to 'All'
AllowShutdown=All

Next, the machines that should be served must be specified by their IPs, host names or domain names; wildcards are available:

# remove the # before the asterisk and every machine can login
#*     #any host can get a login window
192.168.1.42  # only allow this IP to connect
...
# the same holds for the chooser setting
#*     CHOOSER BROADCAST       #any indirect host can get a chooser
192.168.1.42   CHOOSER BROADCAST

Before restarting the remote machine, you should check whether the X server is started with the -nolisten tcp option by executing ‚grep -r nolisten /etc/kde4/kdm/‘. If you get something like

/etc/kde4/kdm/kdmrc:#-nolisten tcp

the option is commented out and everything is fine. (Maybe I’m getting old; I remember disabling this option but couldn’t find the lines I remembered.)

Establish the remote connection

After rebooting the remote machine, you can now connect to it by invoking X :1 -query <target ip address> as root on the local machine. I use another tty for that, so I don’t have a busy terminal on my desktop.

fglrx errors

The first attempt to connect to the remote machine resulted in two errors:

(EE) fglrx(0): incompatible kernel module detected - HW accelerated OpenGL will not work
(EE) fglrx(0): Not enough video memory to allocate CMM buffer (width = 64, height = 64, 
                                                                          alignment = 4096)

The first might have been due to the fact that the Plasma desktop was configured to use OpenGL; I changed that to XRender. This assigns the graphical effects to the CPU, which is not perfect but works. In the same step, I updated fglrx on my desktop to the most recent version from the AMD/ATI website. Finally, I configured Gnome on my desktop to use Metacity as the window manager instead of Compiz.

The last step probably didn’t contribute to the solution, but as I did all changes at once, I can’t tell for sure.

Security note

XDMCP exchanges unencrypted messages between the X server and the terminal. Hence it is not a good idea to use it over networks with untrusted peers, e.g. the Internet, as this also implies that keyboard input (passwords, …) is transferred in clear text.

X.509 certificates for e-mail addresses at TU KL

At TU Kaiserslautern, the RHRK runs the RHRK-PKI, a service that is, as far as I know, little known. Every student of the TU can apply there for a certificate for their university e-mail addresses.

To apply for a certificate, you simply fill out this web form. Afterwards, you go to Mr. Stemler in room 34-318 and present identification, whereupon he signs the requested certificate. Having additional addresses included was no problem for me when I asked (besides @rhrk, you can also have @cs and @informatik added). Bringing a USB stick along can’t hurt, although I’m no longer sure whether you really need one.

Once you have imported the resulting PKCS#12 file into the mail client of your choice, you can happily send signed mails and receive encrypted ones from then on. Spread the word. 😉
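
If you want to take a quick look at what the resulting file contains before importing it, openssl can list it (the file name is a placeholder):

openssl pkcs12 -info -in certificate.p12 -nokeys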

Custom browser build for Chromium OS misses package xi

After setting up the build environment for Chromium OS, I wanted to integrate the Chromium browser so that I can make changes to it as well. How this should work is explained here. However, there was a problem with the gclient sync command in the last part of the first step (gclient Checkout Build).

I use a 64-bit Ubuntu 11.10 as the development system, as suggested, and create 32-bit versions of Chromium. This seems to be the problem here. When I executed gclient sync, it led to the following error message:

Updating projects from gyp files...
Package xi was not found in the pkg-config search path.
Perhaps you should add the directory containing `xi.pc'
to the PKG_CONFIG_PATH environment variable
No package 'xi' found

The quest for xi

First I searched the chroot of my Chromium OS build environment for this xi.pc file. There was one, but unfortunately in /usr/lib64/pkgconfig/. So I simply needed a 32-bit version of it, but where to find it?

Querying the web for „package xi“ and „xi.pc“ didn’t yield anything useful. After a long while I somehow found out that I was actually looking for libXi, the X11 Input extension library (note to self: in such cases, trying „lib“ as a prefix might not be so bad at all).
After installing libXi in the chroot through
sudo emerge x11-libs/libXi
gclient sync finally worked as desired.

Run a custom Chromium OS build in Virtualbox

The developer documentation for the Chromium projects appears pretty detailed and the quick start guide is straightforward. Getting a first build didn’t take long – besides the build time of about 2 hours (X2 6000+, 4 GB RAM). To run the build I wanted to use a virtual machine, as this makes it easy to try out modifications.

Create an image for Virtualbox

The hard way

The build environment comes with a script that converts a native image into images for Qemu, Virtualbox or VMware. But invoking the script with --format=virtualbox terminated with an error saying that the command VBoxManage couldn’t be found.

This is because the chroot of the build environment has no Virtualbox installed. This post suggests a way to install Virtualbox there. The same post also mentions that the failed conversion can simply be done outside the chroot, so I chose that solution.

The image to pass as the input parameter to VBoxManage can be found under src/build/images/x86-generic/latest/vm_temp_image.bin relative to the checkout directory. With this, invoke:
VBoxManage convertfromraw /path/to/input_image.bin /path/to/vbox_image.vdi

The simple way

As I found out later, Virtualbox also supports the VMDK format from VMware. So instead of the steps above, you can simply call the script with --format=vmware and use the resulting image. Whether there is a drawback in terms of performance or anything else, I don’t know. Please drop a comment if you know something about this.

Get the image running

Not really surprisingly, Chromium OS can only be set up if there is a working Internet connection available (and if you have a Google account to log in with). In my case the network connection for the virtual machine didn’t work right away; the machine simply didn’t recognize it.

What helped was changing the adapter type of the virtual network interface to Intel PRO/1000 MT Desktop in the network settings of the VM. I have also heard and read of trouble with NAT networking. I haven’t encountered any so far, but switching to bridged networking is said to help in that case.
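
If you prefer the command line, the same changes can presumably be made with VBoxManage; the VM name and the bridge interface below are placeholders, and 82540EM is VirtualBox’s identifier for the Intel PRO/1000 MT Desktop adapter:

VBoxManage modifyvm "ChromiumOS" --nictype1 82540EM

and, should you want to switch from NAT to bridged networking:

VBoxManage modifyvm "ChromiumOS" --nic1 bridged --bridgeadapter1 eth0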

Study plan for the Master Angewandte Informatik

As of today, an additional template can be found on the Studienplaner page. Thanks to Steffen, there is now a template for the Master’s program Angewandte Informatik (Applied Computer Science).

Muon Software Center crashes on fresh Kubuntu

I’ve just installed a fresh 64-bit Kubuntu (11.10) and then wanted to use the package manager/software center, which is now Muon by default, to add all the stuff I need.

Muon repeatedly crashed with a segfault when I tried to launch it. Thankfully, this problem had been around for some time, so it didn’t take me too long to find a solution that worked for me:

In this thread on ubuntuforums.org several fixes are suggested, of which this one worked for me (disabling proprietary drivers didn’t help). It simply requires uncommenting two entries in ‚/etc/apt/sources.list‘ and invoking an ‚apt-get update‘ afterwards. The respective entries are:

deb http://archive.canonical.com/ubuntu oneiric partner
deb-src http://archive.canonical.com/ubuntu oneiric partner