Mbed Ethernet connection

I just received my mbed module. This little ARM device is pretty cool, and the associated tools work quite nicely. Of course it costs a fair bit of money, but I received mine for free for the mbed contest. Really cool, no? ;)

After the classic blink test, I decided to go for a network test. But I don't have any Ethernet magnetics module right now (in fact I should have one, but I'm unable to find it). So let's go magnetics-less! The documentation on the dedicated mbed page says that should be fine. I decided to pull the RJ45 socket from an old broken WRT54.

The main issue was figuring out how to solder this RJ45 onto a veroboard. Here comes the fun part: I remembered that radio amateurs use a technique called "dead bug soldering". Check these guidelines from NASA for examples.

I decided to give it a try:

Just glue the RJ45 onto the veroboard and use a common wire-wrapping technique. Not so bad ;)

The next step is to flash a network example to test.

That's really fun, and the mbed works pretty well. I secretly hope that somebody will come up with an mbed-like board with open source hardware and software.

Enjoy wired networks ;)

Really cheap USB to TTL on ATmega: $1.70

One of the most common ways to interface a microcontroller to a computer used to be the serial port. But nowadays, the serial port has been replaced by USB on most computers. A common way to fix this is to use a USB to TTL converter, or a USB to RS232 converter plus a MAX232. That's fine, but:

  • A USB to TTL PCB costs a bit of money: you can find some on eBay for around 7€ (shipped) and $15 on Sparkfun!!! That's about 2 to 5 times the cost of the microcontroller!
  • A USB to RS232 adapter costs $1.70 (shipped), but it needs some extra level shifting and doesn't really fit on a PCB (it needs a DB9 connector…)

In fact, USB to RS232 adapters are a mass-market product, and the cost is really low. I decided to order a couple of them, just to see if I could use this stuff on a PCB. So I bought a $1.70 USB to RS232 adapter on eBay.

I ripped the plastic off the DB9 and discovered a really tiny PCB. I removed the DB9 and decided to put this little PCB through a scope session. How the hell do they manage to do USB to RS232 with only a couple of external components? There is no big capacitor for a level shifter (remember, RS232 is +12/-12V)? The answer is simple: they don't!!

This device isn't RS232 compliant at all: the signals on the DB9 are TTL levels, not RS232. The output swings between 0 and 5V, and the input can handle -12/+12V but works great with 0/5V too. I simply reused the pads on one side and added a couple of pins.

Please note that the RX pin is missing in this pic, but it is needed of course. Next step: how can I use this with an AVR ATmega (I used an ATmega8, but any will do the trick)? The serial port on a micro is TTL like this board, but the signal is inverted: a "1" on the RS232 side is -12V, which is +5V in TTL, and a "0" on the RS232 side is +12V, which is 0V in TTL. You can find all the information here.

In fact, the MAX232 does both level shifting and inverting, but as I'm too lazy to wire a MAX232 (and it would destroy the cheap aspect of this hack), I decided to handle this in software. This means I won't be able to use the ATmega's builtin serial port, and I need to write some additional code to do the RS232 encoding/decoding by hand. Let's give it a try:

I simply put this on a veroboard, connected VCC and GND to the USB supply, RX and TX to random pins on the AVR, and off we go with RS232 software serial. This can be done quite easily, and I managed to handle 19200 bauds with the internal 8MHz clock of the ATmega. Below you will find the usual uart_putc() and uart_getc()..

#include <avr/io.h>
#include <util/delay.h>

typedef unsigned char uchar;

// set_output(), clr_output() and get_input() are small pin helper macros
// defined in the project sources (see the zip linked below).
#define UART_TX     D,1
#define UART_RX     D,2
#define UART_DELAY  52 // 1/9600 = 104us : 1/19200 = 52us

void uart_putc(char c)
{
  uchar i;
  uchar temp;

  // start bit
  set_output(UART_TX);
  _delay_us(UART_DELAY);
  clr_output(UART_TX);

  // 8 data bits, LSB first
  for (i = 0; i < 8; i++)
  {
    temp = c & 1;
    if (temp == 0)
      set_output(UART_TX);
    else
      clr_output(UART_TX);
    _delay_us(UART_DELAY);

    c = c >> 1;
  }

  // stop
  set_output(UART_TX);
  _delay_us(UART_DELAY);
  clr_output(UART_TX);

  _delay_us(UART_DELAY);
}

uchar uart_getc()
{
  uchar i;
  uchar ib = 0;
  uchar currentChar = 0;

  // wait for the start bit
  while (ib != 1)
    ib = get_input(UART_RX);

  _delay_us(UART_DELAY / 2); // middle of the start bit
  for (i = 0; i < 8; i++)
  {
    _delay_us(UART_DELAY);
    ib = get_input(UART_RX);

    if (ib == 0)
      currentChar |= 1 << i; // this is a 1
  }

  return currentChar;
}
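
For completeness, here is roughly how I use these routines: a trivial echo loop on the AVR side. This is just a sketch; it assumes the pin directions are already set up by the helper macros / init code from the project zip, and the 8MHz internal clock as above.

// Minimal echo test: every character received from the PC is sent back.
int main(void)
{
  char c;

  for (;;)
  {
    c = uart_getc();   // blocks until a character arrives
    uart_putc(c);      // echo it back
  }

  return 0;
}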

Nothing more to say: this hack works really great, and I can now build a bunch of USB boards without paying so much. The only drawback of this approach is that you can't use an interrupt for uart_getc(), so you have to deal with that in your code. Another approach would be to use a single transistor on the RX pin to invert the signal and make it compatible with the AVR's builtin serial port.
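
For reference, if you do add that inverting transistor and switch to the builtin UART, the ATmega8 setup is only a few registers. A sketch along these lines should do for 19200 bauds with the 8MHz clock (from memory, so double-check the datasheet):

#include <avr/io.h>

// Hardware UART init for an ATmega8 at 8MHz, 19200 bauds, 8N1.
// Only usable once the external signal has been re-inverted,
// since the hardware UART expects normal TTL polarity.
void uart_hw_init(void)
{
  UBRRH = 0;
  UBRRL = 25;                                          // 8000000 / (16 * 19200) - 1 ~= 25
  UCSRB = (1 << RXEN) | (1 << TXEN);                   // enable receiver and transmitter
  UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0);  // 8 data bits, 1 stop bit
}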

You can find the whole project (C files + Makefile) in a zip here. I think this little hack is really useful, so please send it to all your DIYer friends; it can save them money, time…

// Enjoy cheap USB ? :)

Boosting IR remote video sender (Thomson VS360U)

At home I have a bad TV antenna, so we only use the cable receiver to watch TV. But I have two TV sets, so I decided to buy a video sender a couple of months ago, though I never managed to get it working nicely. I bought a Thomson VS360U video sender. This one is really cheap (24€), works on 2.4GHz for the audio/video and 433MHz for the remote.

On the first test, I discovered that the transceiver comes with a couple of IR LEDs. You have to glue one IR LED in front of each piece of equipment you want to drive: for me, the cable receiver, the DVD, the Dvico, and the AV amp.. I tried this, but it's a mess: all the LEDs are soldered on a single string, and they tend to move. Not really a nice experience. This is simply too crappy to be used.

I decided to mod it to use a single IR LED with better gain. The first step is to find the right place for the mod: just open the transceiver and locate the power supply (Vcc/Gnd) and the IR transistor. It was quite easy; the only trick is to solder the wire for the IR transistor just before the base resistor. Here is the result.

You can find a better pic in the gallery. I used a scope to find the IR transistor, but this can be done without one.

Let's build a simple IR booster that connects to these pins, and everything will be fine. I used a common BC547, but any general-purpose transistor will do the job.

The result:

As you can see, it's small. I placed it near my cable receiver and everything is working nicely. I can now control every piece of equipment (cable, DVD, Dvico) from my room without any lag or lost IR signal.

I managed to fix this cheap video sender without too much effort, so I'm happy. This kind of hack can be applied to a lot of video sender devices: the hardest part is finding the IR transistor, the rest is basically the same.

Enjoy TV from bed ;)

From Python to Vala for 1wire monitoring w/ Munin

Recently I decided to switch my main computer off every day. This computer was usually on all the time and consumed a lot of electricity. So I switched to a really small computer for the common tasks: SSH server, wake-on-LAN (for my main computer), VPN access and mail relay. This new computer consumes 7 watts, but its specs are modest: a Geode CPU at 300MHz, 128MB of RAM, and a 40GB hard drive. Yes, that's really low, but more than enough for these tasks. I log into it from outside to access all the computers inside my home network.

The main issue here is that I used my main computer to monitor a 1wire network of outdoor, heating and room temperatures. I used a small Arduino board and a couple of Python scripts to populate some Munin graphs, like this one:

As you can see on this graph, I use a reference temperature from Guipavas. This data is public, and I use Weather.com for the info. Everything has worked fine for about a year now. But when I switched to my new little box (300MHz..), the Python script used to monitor the 1wire network and gather the Weather.com reference turned out to be a bit heavier than expected for this little box.

I first thought of rewriting this in pure C, but having to deal with XML parsing (libxml) and POSIX serial in C.. Long story short, I decided to rewrite this script (and others) in Vala. I will not dump a Vala introduction here, but in short it's a new language, used by the GNOME desktop, that compiles to C. The syntax is C#-like, it has a lot of libraries, and it doesn't need the bloat of an interpreter (nor a VM). My first test was to listen to the Arduino serial port.

public void run()
{
    ser = new Serial.POSIX();
    loop = new GLib.MainLoop();
    ser.speed = Serial.Speed.B38400;
    ser.received_data.connect(parseSerial);
    loop.run();
}



I used a Serial.vala wrapper found on the net; it's simple and neat. I just added some string parsing, and I got my Arduino 1wire network working with Vala.. The next part is the Weather.com parsing, which will be covered in a future post.

To conclude, the Vala result is fine. The resulting binary is small (38KB), but it has quite a lot of dependencies (libsoup, glib, pthread, gobject..) and consumes more memory than my Python script: the Python interpreter + ElementTree (XML parsing) + pyserial eat around 8.9MB of RAM, while my Vala code eats 12.3MB. But keep in mind that this is with all the shared libraries included. So if you run a couple of scripts like me, this memory isn't a big deal, because the libraries are shared across the different processes without any overhead.

In the meantime, the main difference between the two versions is speed. Here are some results with the time command, for the Weather.com functions only (I dropped the serial I/O stuff for this test):

jkx@brick:~$ time python weather.py
Temp:    20
Pres:    1021.0 hPa
Wind:    19 km/s

real    0m2.105s
user    0m1.468s
sys     0m0.216s
jkx@brick:~$ time ./weather
Temp:    20 deg
Pres:    1021.0 hPa
Wind:    19 km/s

real    0m0.427s
user    0m0.084s
sys     0m0.032s


OK, Python takes about 5x the Vala time for the same job. Of course, this piece of code isn't exactly the same and involves a network access, but I tested this a couple of times and the result is always roughly the same. So I decided to look closer, and found that although the Python interpreter itself loads quite quickly, ElementTree + urllib2 take 1.35 seconds to import.
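
By the way, the import cost is easy to check on its own with the time command; something like this (the exact module names depend on whether you use the standalone elementtree package or the builtin xml.etree):

time python -c "import urllib2, xml.etree.ElementTree"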

I get it: this system has a really small CPU, and importing libraries from the hard drive takes time.. which doesn't happen with my Vala code, since the binary is small and all the dependencies are already loaded by the OS itself. To conclude, Python is still my favorite language, but running Python scripts on a small system has an overhead I must take care of, and avoiding loading/unloading libraries is the key. A single long-running Python process with the scripts loaded would be a better choice. And for small custom apps on this kind of system, Vala seems to be a good alternative.

// Enjoy the sun


Disable HAL in Xorg on Debian / Ubuntu

OK, let's go for another big issue on the road to building a complex distro.. Maintainers tend to pile up one feature after another.. and now Debian is getting closer and closer to bloat..

Anyway, some time ago HAL was introduced in Xorg. This allows you to hotplug mouse/keyboard… But if, for some reason, your HAL is buggy.. you can't use a keyboard or a mouse in Xorg. That's bullshit! I ran into a bug with RAID + HAL, and HAL is now segfaulting on my computer.. so I need to get rid of HAL in Xorg…

First you must modify /etc/X11/xorg.conf with something like this:

Section "ServerFlags"
    Option "AutoAddDevices" "False"
    Option "AllowEmptyInput" "False"
EndSection

This disables the HAL support, but if you still want a working keyboard and mouse, you must install the following packages:

  • xserver-xorg-input-kbd
  • xserver-xorg-input-mouse
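
On Debian/Ubuntu this is a single command (as root or with sudo):

apt-get install xserver-xorg-input-kbd xserver-xorg-input-mouse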

That's it… no more HAL support in Xorg, and everything works fine…

Howto resize a libvirt (kvm/qemu) disk image

I've been using KVM for a while at work. Everything works quite fine, but today I needed to grow a disk image. I found some information, but none of it was really clear, so here is the result:

First, create an empty image file with this command (don't use dd, qemu-img is really quicker):

qemu-img create -f raw temp.img 10G

Next, simply concatenate your image file and the temp one into a bigger one..

cat foo.img temp.img > bar.img
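
If you want to double-check the result, qemu-img can report the new virtual size (just a sanity check, not required):

qemu-img info bar.img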

You will get a new image file which is 10GB bigger than the original one.. Now you can boot your OS and discover (via cfdisk, for example) that your system has an additional 10GB of unused space.. So, next step:

  • Just create a new partition, and mount it in the normal way
  • Boot your KVM guest OS from an ISO file containing GParted

I tried the second approach and used an Ubuntu install image to boot (using virt-manager, this is really easy to do), then resized the partition to my needs.. simply reboot and "tada" :)

Enjoy disk ?

Howto use the AVR Dragon JTAG on Linux (AVaRICE + avr-gdb + DDD)

A couple of months ago I bought a little AVR Dragon board. My initial plan was to use it for debugging programs with the embedded JTAG. But I ran into several issues with that, mainly because of the lack of documentation on this topic. So, here we are ;)

The AVR Dragon is nice because you can use it as a small development board without any other requirements: simply drop the needed ATmega on the board and do a little wiring for the JTAG and power supply.

As you can see, it's compact and nothing else is needed. The power supply comes from the USB port, and I soldered a DIP socket on the board.. and that's it.

I use the JTAG connector, so now I can use a real debugger instead of playing with the UART. Simply put a breakpoint, and enjoy :) This way, I figured out that most of the time I simply push some data into arrays and inspect them with the debugger. This is really efficient. For example, last week I needed to fix a timing issue with an IR sensor: I simply wired up the little board and pushed all the interrupts into an array along with their timings. Of course, this can be done with a serial connection too, but it takes more time, and even worse, if you encounter a bug you will have to figure out where it is (in the UART printf, or in the code itself)..

So, how to use this with a Linux OS?

First you need to use AVaRICE to program the ATmega, with a command like this:

avarice -g -j usb --erase --program --file main.hex :4242

Here is the result:

AVaRICE flashes the hex file to the ATmega and waits for a GDB connection on port 4242. GDB is fine, but not really visual ;)
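
For the record, a plain avr-gdb session against AVaRICE looks something like this (main.out being the ELF file produced by your build, so adjust the name):

avr-gdb main.out
(gdb) target remote localhost:4242
(gdb) break main
(gdb) continue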

Let’s take a look at DDD

To use DDD with avr-gdb (the GDB for AVR), you need to create a config file, for example gdb.conf, and put this in it:

file main.out
target remote localhost:4242

And for the final step, just launch DDD like this:

ddd --debugger "avr-gdb -x gdb.conf"

Next step: simply place some breakpoints, and then press the "Cont"(inue) button in DDD. Et voilà:

I hope this little tuto will help people looking for a nice AVR debugger on Linux (or any OSS system). The AVR Dragon is definitely a must-have for low-budget users in the AVR scene.

Enjoy bug ? :)

Nvidia 173.14 xrender benchmark

In a previous post, I looked closely at the way the Nvidia binary driver works. In fact, like a lot of users, I ran into issues with Firefox and other software that use the XRender extension to display stuff. A couple of days ago, Nvidia released a new version of its driver. They claim this version fixes the XRender lag, so I decided to run it against my previous bench results to see if the current version changes anything.

So the configuration is the same:

  • Nvidia 173.14.12, kernel 2.6.24 and a Q6600

First, I need to say that with the default settings the new driver doesn't really work nicely. It even looks slower than the previous one in the default configuration. So, for the first time in this bench series, I tweaked InitialPixmapPlacement and set it to 2. In my previous bench batch, this tweak produced bad results so I had disabled it, but this time the driver is so slow that without this tweak the benchmark would be useless.
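
If I remember correctly, the usual way to flip this setting at the time was through nvidia-settings, with something like:

nvidia-settings -a InitialPixmapPlacement=2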

Ok, let’s go for the results:

First, we can clearly see that the new version is really better on some points: PictOpClear is the best result. We can see the Nvidia team has really worked on this, and the result even outperforms the ATI driver. On the other side, PictOp[Con|Dis]jointClear is still very high.

For the rest of the tests:

Two things. On almost all the results the new driver is slower than the previous one (perhaps this is an InitialPixmapPlacement side effect), but the difference isn't really big: about 0.5 sec on tests whose totals are far above that.. And ATI still clearly outperforms Nvidia here. In fact, the Nvidia driver team claims these primitives are never used (or shouldn't be). From what I know right now, some software does use them; it looks like KDE (via Qt) does. Apparently the Nvidia team asked the KDE devs to change their code to achieve better results on Nvidia cards… This is perhaps not the best way, but we need to wait for the KDE devs' answer before going forward.

The second important thing is that PictOpConjointXor now has a result of 0.

As you can see in this benchmark, the new Nvidia driver seems to perform better than the previous one. From a user perspective, it looks like the fixes applied to PictOpClear (and perhaps PictOpConjointXor) produce some great results. Right now Firefox performs nicely, and the whole desktop is fine. I'm quite sure there is still room for improvement (look at the open source Intel driver results for PictOpOver, PictOpIn…, and you will see the binary drivers are far from the OSS results), but this release is the first for the 8xxx series that performs at a decent speed, and that is good news.

Thanks again to my friends who sent me their own results to compare, and to the people on various forums who helped me with this stuff.

SMD Soldering on the cheap

Like a lot of hobbyists, I don't really like to solder SMD. It's hard to do with a normal soldering iron (even if it can be done), and a hot air soldering station costs a bit (around 180 euros, shipping included).

I had already talked to my friend Bernt about this. He said he had already used a cheap gas hot air gun with good results. OK, he isn't really a hobbyist, and has a professional soldering station at work, but…

The next morning, he was waiting for me at work and gave me a gas hot air gun (plus an additional soldering tip).. what a great present! Thank you, mate. In fact, they just started to sell this cheap stuff in their shop: Splashelec.

Fine, so it's time to give it a try, no?

Hum, guess what: I don't have any electronics flux at home. In fact, that costs too much too! (and you can't keep it for a long time..) But I have some plumber's paste on my desk. I use this stuff for WiFi antennas, not electronics, but it should be fine.

Simply apply the paste with a brush and use the iron to melt it a bit. As I'm new to hot air soldering, I decided to use a normal iron for this step. Next, remove the surplus with some water. In fact, you could use normal solder here, but the paste is a bit easier to apply…

OK, ready to play! Fire!

And the final step: place the chip on the board, mount the hot air nozzle on the gun and light it. Set it to around the maximum temperature and gently approach the chip pins. Don't be afraid to take your time: chips are made for a reflow process, so they can handle hot air without too much trouble.

Here’s the result

Fine, no? ;) The soldering is almost perfect. It's my first time with a hot air soldering gun, so.. but I'm really happy. It's much easier than a normal soldering iron.

2D benchmarks on Linux Nvidia, Intel, ATI: xrender

For my new computer I bought an ATI HD 2600 PRO with a bunch of memory. This card has some really good 3D results and works well on Linux. But I ran into some issues with the Xv extension on this board. In fact, the driver (the free one or the binary one) doesn't seem to support sync on vblank. So when an app displays data on the screen, some image tearing appears. This mainly occurs when I'm watching videos, but in 3D games too. This is a really stupid bug or mis-feature. How can a serious video driver programmer do that?

After a couple of months, I decided it was enough: I was sick of these dirty stripes on screen. I tested every ATI driver, one after another… (the ATI open source drivers perform too poorly to be used on an everyday desktop; could you live without Google Earth?).. so I decided to go to the other side and bought an Nvidia 8600GT from ASUS. This card performs about the same as the ATI in 3D and has an affordable price. So I switched from ATI to Nvidia.

ATI offers better open source support, but the Nvidia binary driver is really nice to use and has better support today from stuff like Compiz and co.. and NO MORE STRIPES!! :)

A couple of weeks ago, I upgraded my Ubuntu Gutsy to Hardy. Everything was OK, until I played with Firefox. Some heavy pages (like Amazon or Gmail) were damn sloowwwww! Scrolling was a source of…… grrrrrr…. Firefox on Hardy is 3.0b5. This version has a major "feature": it uses XRender for page rendering. And it looks like XRender is damn slow on Nvidia cards.. In fact, Nvidia has already worked on this kind of issue before. Without waiting any longer, I decided to run a little benchmark with the help of friends, using xrenderbenchmark. So here are the results.

Benchmarks

The benchmarks were done by me and 2 friends, on Q6600 or E6600 Intel CPUs running at 2.4GHz, with kernel 2.6.24.1. The graphs only show the Plain results (not Plain + Alpha, or Transformation), but the results are quite similar anyway.

Legend:

  • 8600GT/nv : Nvidia 8600GT / Xorg 1.4.1git 32 bits / Nvidia GPL driver
  • 8600GT/nvidia : Nvidia 8600GT / Xorg 1.4.1git 32 bits / Nvidia binary driver ver: 169.12
  • 8600GT/nvidia-64 : Nvidia 8600GT / Xorg 1.4.1git 64 bits / Nvidia binary driver ver: 169.12
  • Intel GMA X300 : Intel GMA 3000 / Xorg 1.4.0.9 64 bits / Intel GPL driver
  • ATI HD2600PRO : ATI HD 2600 Pro / Xorg 1.4.1git 64 bits / ATI GPL driver

I split the results into two graphs for convenience.

As you can see in this first part, the numbers are really small. The Nvidia GPL driver is the worst: 5 times slower than any other. Not good news, and the binary one has some bad results on 2 tests. The ATI HD and Nvidia drivers offer roughly the same results, but remember, this is the GPL ATI driver!… The Intel results aren't very consistent in this part.

But the next graph gives us an absolutely different picture!

In every graph, the Nvidia drivers (GPL or binary, 32 or 64 bits) are at least 6 times slower. Intel performs very well; no surprise, these cards are damn cool, with a perfect driver for Linux.. but too slow in 3D to really rock. And the ATI GPL driver is the clear winner of these benchmark tests.

As my issue is with Nvidia, I can comment on those results: the GPL driver performs better than the binary one. This is not a big surprise, because I can see it in Firefox, even if it's still slow. There is a difference between 64 bits and 32 bits, but I guess this is more kernel related than the driver itself.

I'm not a video guru; I only did this to figure out what's going on with my computer. I'm publishing it in the hope it might help somebody else, and to find help.

Update: The numbers can be found here.

Thanks to Ludo and Christian for their help!

Important update: Check the new driver results!!