Friday, May 09, 2008

Java RMI without name server

You know, Java RMI is cool, but sometimes that pesky name server just gets in the way. One usually starts the name server by typing something along the lines of rmiregistry. Well, there are other ways, but that one is very common.

Today I decided to tinker a little bit with Java RMI. My idea was to code some sort of channel that would give me direct access to a remote object, as long as I know its address in the distributed system. I want the client to be able to get the object without having to do a lookup first, just by typing:

RemoteObjInterface roi = ImportChannel.imp(host, port);
roi.remoteMethod();

Notice that there is no type cast in this code. Actually, the type cast happens inside ImportChannel, so I am just sweeping the garbage under the carpet. The server, in its turn, should be very simple too. The code should look like this:

RemoteObjImpl ro = new RemoteObjImpl();
ExportChannel ec = new ExportChannel(ro);
ec.export();

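Just to give an idea of how this can work, here is a rough sketch of an export channel that publishes the object without a registry: it exports the object through RMI as usual, and then hands the resulting stub to whoever connects on a plain TCP socket. Take it as a sketch of the idea only; the actual files are linked below and may differ in the details.

import java.io.IOException;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// Sketch of an export channel: the object is exported through RMI as
// usual, but its stub is handed out over a plain socket, so no
// rmiregistry is needed.
public class ExportChannel {
    private static final int PORT = 20000; // the port used in this example
    private final Remote stub;

    public ExportChannel(Remote obj) throws RemoteException {
        // Export on an anonymous port; RMI gives us back a serializable stub.
        this.stub = UnicastRemoteObject.exportObject(obj, 0);
    }

    public void export() throws IOException {
        ServerSocket server = new ServerSocket(PORT);
        while (true) {
            // In the real code a ServerThread serves each client; here the
            // stub is written back inline to keep the sketch short.
            Socket client = server.accept();
            ObjectOutputStream out = new ObjectOutputStream(client.getOutputStream());
            out.writeObject(stub);
            out.close();
            client.close();
        }
    }
}
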
The code is available below. I am using the example application from these slides.

  • Server. The server application, which creates a remote object and exports it via the export channel given further below.

  • TimeClient. The client application. It uses an import channel to get a hold of the remote object, and then it calls a remote method on it.

  • Second. The remote object's interface.

  • SecondImpl. The implementation of the remote object.

  • SecondImpl_Stub. This is the stub for the remote object. We need a stub to call a remote method, you know. Of course, we do not really need the source code, just the bytecodes, but I am giving the source here just for fun. To produce a stub, one can use rmic. By default, this tool erases the generated source code after producing the bytecodes; to keep the source, use the option -keep.

  • ServerThread. The server uses this thread to handle clients looking for the remote object.

  • ExportChannel. This is the interface that a server must use to make an object available to receive remote calls.

  • ImportChannel. This is the interface that the client must use to get a hold of a remote object.
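
To complement the sketch of the export channel given above, here is the rough idea behind the import channel: it connects to the server, reads the serialized stub, and performs, inside a generic method, the cast that the client code gets to skip. Again, this is an illustration of the idea, not a verbatim copy of the file linked above.

import java.io.IOException;
import java.io.ObjectInputStream;
import java.net.Socket;

// Sketch of an import channel: it fetches the serialized stub from the
// server and casts it, so the client code does not have to.
public class ImportChannel {
    @SuppressWarnings("unchecked")
    public static <T> T imp(String host, int port)
            throws IOException, ClassNotFoundException {
        Socket socket = new Socket(host, port);
        ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
        // The unchecked cast below is the garbage under the carpet: the
        // compiler infers T from the variable that receives the result.
        T stub = (T) in.readObject();
        in.close();
        socket.close();
        return stub;
    }
}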


If you download all the files above, then running this example application is very easy. You will need two terminals. In the first, initialize the server by typing java Server. It will print the port on which the object is listening. To use the client, go to the other terminal and type java TimeClient port, where port is the port where the server is listening. In this example, I have set the server to start listening on port 20000.
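
For completeness, the remote interface and its implementation are as plain as RMI gets; they look something like the sketch below (the method name here is just a guess of mine, the real one is in the slides):

import java.rmi.Remote;
import java.rmi.RemoteException;

// The remote interface: a single method telling the current time in seconds.
public interface Second extends Remote {
    long getSeconds() throws RemoteException;
}

import java.rmi.RemoteException;

// The implementation; the export channel takes care of exporting it.
public class SecondImpl implements Second {
    public long getSeconds() throws RemoteException {
        return System.currentTimeMillis() / 1000;
    }
}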

Tuesday, April 29, 2008

Fortran plus MPI in OSX Tiger

Finally I've got the Fortran/MPI stuff running on my Mac. The installation process is quite involved, so I will describe it here. First of all, the physics software that we must run is OpenPIC2, and it must be installed once the running environment is set up.

We need a couple of things in order to run OpenPIC2: the Fortran compilers up and running, and also an implementation of MPI (the official MPI documents are available here). To install all that stuff, we'd better use some package manager. I recommend two packaging tools: MacPorts and Fink. I had to install both, as I need Fink to get g77, and I need MacPorts to get g95. You can download Fink here, and MacPorts here. Installing these programs on an Apple computer is very simple.

Ok, now that we have the package managers up and running, it is time to install the compilers. Let's start with g95:

sudo port -v selfupdate
sudo port -d install g95

This should install g95 on your machine. Actually, there is some other stuff that is pretty useful, and I guess we should install it right now. The first is rpm, which we need to unpack the openmpi source package, and the second is stow, which helps to organize the installed software.

sudo port -d install rpm
sudo port -d install stow

That should do it for the port part of the installation routine. Now let's get g77, and to do this we use fink:

fink install g77

It is as easy as that. Now you probably have two different compilers running on your machine. Fink puts its stuff in /sw/bin, so you'd better add that directory to your path. To add it permanently, you can edit /etc/profile.

Good, good. Now comes the Turing test for the faint of heart: installing openmpi from source. Actually, you can install it using port, but then it will not enable mpif90, which we need to compile stuff. You can get the source rpm from the openmpi site. I chose openmpi-1.2.6-1.src.rpm. Anyway, download it to your machine, and then do

rpm -i openmpi-1.2.6-1.src.rpm
cd /opt/local/src/macports/SOURCES
sudo tar -jxf openmpi-1.2.6.tar.bz2
cd openmpi-1.2.6
export CXX=g++
export CC=gcc
export F77=g77
export FC=g95
./configure
sudo make
sudo make install prefix=/usr/local/stow/openmpi-1.2.6
cd /usr/local/stow/
sudo stow openmpi-1.2.6

These commands should get openmpi running on your system. To test it, type mpif90 at your command prompt. If this does not report any error message, you can just throw some fireworks; otherwise, I recommend sitting down and crying.

To compile the physics software, download this tar ball and unpack it somewhere on your machine. You might have to edit your LAM.make file. The one that I have uses the following options:

MPIFC = mpif90
FC90 = g95
FC77 = g77
OPTS90 = -O3 -fno-second-underscore
OPTS77 = -O3 -fno-second-underscore
MOPTS =
LOPTS =
LEGACY =

In order to compile the first example, try:

make -f LAM.make lesopenpic2f77.out

You will probably get a lot of warnings, but do not let them scare you away. To run the program, try:

mpirun -n 2 esopenpic2

You will get a bunch of numbers. Do not ask me what they are about, but it looks right from my 600m distance.

Saturday, April 19, 2008

A little of Erlang

I have started writing some toy programs in Erlang. That is a functional, dynamically typed language that provides support for implementing parallel and distributed systems. It is very simple to write a small distributed application running on a single machine; however, I had trouble finding information about this on the web. Thus, I decided to write this little tutorial, which uses examples from the book Concurrent Programming in Erlang by Joe Armstrong.

This is the code of a server application that allows users to make deposits, withdrawals and inquiries. The client application is given here. I tested these programs on CentOS Linux, kernel 2.6.18-53.1.14.el5, running on a 64-bit Intel(R) Xeon(R) CPU. My interpreter is Erlang (BEAM) emulator version 5.5.2 [source] [64-bit] [async-threads:0].

To run these examples, open two command shells, which I shall call the server shell and the client shell. In the server shell, change to the directory that contains bank_server and start the erlang prompt:

erl -sname bank

Once in the erlang prompt, type the following commands:

c(bank_server).
bank_server:start().

Likewise, start the erlang prompt in the client shell, also giving the node a short name:

erl -sname client

To test whether the server is reachable, you can ping the server's erlang node from the client shell:

net:ping(bank@Tuvalu).

If you get pong as the answer, then everything is fine; but if there is any error, you will get the atom pang instead. I really think this is a bad interface for reporting errors, but, alas, it is not my decision... Anyway, here Tuvalu is the short name of the machine where I was running this example, and bank is the node name, set on the command line when starting erlang in the server shell. Once in the erlang prompt, after compiling the client with c(bank_client)., one can access the bank server using the interface provided by the client application. A simple example session is:

bank_client:deposit(fernando, 100).
bank_client:ask(fernando).
bank_client:withdraw(fernando, 50).
bank_client:ask(fernando).

Saturday, January 26, 2008

LLVM 2.2

Today I installed LLVM 2.2. As a first test, compiling SPEC2000 using the extended linear scan algorithm produced this result, with this sequence of passes. Now I will try to install the puzzle solver in this new version.

Sunday, December 09, 2007

LLVM 2.1

I will be trying to deploy the puzzle solver on LLVM 2.1. I checked the code, and it does not seem to be a very easy task: it has changed a lot since 1.9. After that, I will have to add instruction folding, and then comes the really difficult part: making the puzzle solver retargetable... don't cry for me Argentina... Just to warm up, here is the list of passes invoked by the original LLVM 2.1 with linear scan, and here is a list of the passes invoked with the simple register allocator.

The register classes in x86, and the new register numbering, as defined by LLVM 2.1, are given here.

Tuesday, September 25, 2007

Profiling LLVM

It is possible to use gprof to profile LLVM tools such as llc. It is necessary to build LLVM with the profiling functions enabled. To do this, go to the LLVM root and type:

make -j2 ENABLE_PROFILING=1

That will build a 'Profile' directory containing the binaries of all the LLVM tools. Now, every time you run llc it will produce a gmon.out file containing profiling information. This file can be read with gprof using the command line:

gprof $PATH_TO_LLC/llc > out.prof

Sunday, September 23, 2007

Jump tables

Jump tables are a nice idea, but the way LLVM is producing them is hurting the pass that breaks critical edges. My solution, not quite democratic, was to forbid jump tables altogether. To do this, I had to edit the file SelectionDAG/SelectionDAGISel.cpp. The method visitSwitch is responsible for creating jump tables, so I simply commented that code out. Now, my SelectionDAGISel.cpp looks like this.