mobile phone as modem

Sunday, October 26, 2008

There are a number of posts around the place on getting GPRS or 3G cellphone connections working on linux. I wasn't interested in bluetooth as I have limited batteries and a usb cable. And you don't always know what your local settings should be.

Here's all I did on Xubuntu Hardy:

  1. Plug the phone into the computer using the usb cable.
  2. In a terminal, type "sudo wvdialconf". (This sets up available baud rates, etc)
  3. In a terminal, type "sudo vi /etc/wvdial.conf".
  4. Edit this file with:
  5. Phone = [for my nokia on Vodafone New Zealand, the number is "*99#". I phoned them to find out.]
  6. Username = [the login name, if your mobile company requires it. Vfone NZ doesn't, but wvdial needed something here. I used "name".]
  7. Password = [as with the username. I used "password".]
  8. Save and close the file.
  9. Connect to the internet via your mobile by typing "wvdial" in a terminal window.
  10. When done, press ctrl-c in that window.
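For reference, here's roughly what the finished /etc/wvdial.conf might look like. The Modem, Baud and Init lines are whatever wvdialconf detected on your system, so the values below are examples only:

```
[Dialer Defaults]
; detected by wvdialconf - yours will differ
Modem = /dev/ttyACM0
Baud = 460800
Init1 = ATZ
; added by hand (see steps 5-7)
Phone = *99#
Username = name
Password = password
```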
So easy and so great!

java modulus for floating point numbers

Thursday, October 09, 2008

The java modulus (or remainder) operator (%) performs a truncated division with floating point numbers (ie float, double), so if you want the IEEE 754 remainder (where the quotient is rounded to the nearest integer rather than truncated), use Math.IEEEremainder instead. It's fine to use % for ints.
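A quick sketch of the difference (plain Java, results worked out by hand):

```java
public class RemainderDemo {
    public static void main(String[] args) {
        // % truncates the quotient toward zero: 5.0/3.0 -> 1, so 5.0 - 1*3.0 = 2.0
        System.out.println(5.0 % 3.0);                    // 2.0
        // IEEEremainder rounds the quotient to the nearest integer:
        // 5.0/3.0 -> 2, so 5.0 - 2*3.0 = -1.0
        System.out.println(Math.IEEEremainder(5.0, 3.0)); // -1.0
        // for ints, % gives the familiar result
        System.out.println(5 % 3);                        // 2
    }
}
```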

linux tools

Monday, September 29, 2008

What I use on xubuntu linux for specific tasks:

Task | Tool | Date | Ubuntu Package | Comment
pdf annotating | flpsed | Sep 2008 | flpsed | Very basic app that enables load/save of ps and pdf. Multi-line text can be added to the docs. No highlighting option or editing of existing content.
pdf editing | pdfedit | Sep 2008 | pdfedit | Allows editing of any part of multi-page PDF documents. Can be very slow to use on long documents. New content such as highlighting and single-line text can be added also.
vector graphics / single-page pdf annotating | inkscape | Sep 2008 | inkscape | Great tool for vector graphics, quick and easy results. Can import a single page of PDF documents.
xml viewing / formatting | XML Copy Editor | Sep 2008 | xmlcopyeditor | Load and view xml. Validate form and against dtd etc, plus nicely format. Quick and easy.
diff | meld | Sep 2008 | meld | Excellent visual diff/merge tool.
text/source editor | SciTE | Sep 2008 | SciTE | Excellent text editor with awareness of many languages and markups. Tabbed view.

sql where not with nulls

Wednesday, September 24, 2008

Ah ha! Sound confusing? Why yes it is.

Turns out that SQL (and HQL for that matter) won't match regular operators against a null field. Fair enough, it means:

SELECT * FROM myTable WHERE id=6

will evaluate to false if the id is null, so rows with null values won't be returned. Ok.

However, where it gets tricky is in NOT clauses, which also won't return rows where the value is null. If an operator (other than IS NULL or IS NOT NULL) is used on a null field, SQL evaluates the comparison to null, which the WHERE clause in turn treats as false. So...
SELECT * FROM myTable WHERE NOT id=6

will return rows where id is: 1, 2, 3, 4, 5, 7, 8 etc...
but NOT where id is: 6 or null

The way around it? If the field you are 'NOT' selecting on can be null, then you must include a 'is null' OR match:
SELECT * FROM myTable WHERE (NOT id=6 OR id is null)
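To see this behaviour in action without a database server, here's a quick sketch using sqlite via Python's sqlite3 module (the table name and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (id INTEGER)")
conn.executemany("INSERT INTO myTable VALUES (?)", [(1,), (6,), (None,)])

# NOT id=6 silently drops the NULL row as well as the id=6 row
print(conn.execute("SELECT id FROM myTable WHERE NOT id=6").fetchall())
# [(1,)]

# the IS NULL branch brings the NULL row back
print(conn.execute(
    "SELECT id FROM myTable WHERE NOT id=6 OR id IS NULL").fetchall())
# [(1,), (None,)]
```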

javascript scripting tricks

Saturday, September 20, 2008

I often write little javascript scripts to get a job done on the web, usually using greasemonkey. To keep track of the useful tidbits that I tend to forget because I use them so infrequently, I'll keep this post updated as I remember/come across them.

Adding elements to html



Adding a cell to a table row
var row=document.getElementById('myTable').rows[0]; //or however you find the row
var mycell=row.insertCell(2); //0-based index of the insertion position
mycell.innerHTML="NEW CELL"; //raw html content of the cell

Working with greasemonkey


Greasemonkey differs from writing a script that loads with the page, in that handlers you set up on the page don't implicitly have access to the functions within the greasemonkey script, so setting up things like onclick won't work. My solution to this used to be to add a script element (defining my functions) to the page. While this worked, it was very messy and hard to maintain. Here's a much better solution, which uses an event listener rather than just defining onclick (for example).
This is from consuming experience:
In the original html page
<div id="thediv">...</div>

So our greasemonkey script contains
document.getElementById('thediv').addEventListener('click', myFunction, true);

and then anywhere in the greasemonkey script we can define myFunction
function myFunction() {doStuff();...}

1 million people in a self-sustained bubble

Tuesday, August 26, 2008

Inhabitat has an article about a new concept from a Dubai environmental design firm, touted as a completely green, self-sustained mega-structure that could house a million people. A big leap from what we can comprehend right now, but could this be the way of the future, particularly for the developing world?

Wow

interface fulfillment using fields - a java language proposition - part 2 of 2

Friday, July 11, 2008

I'm proposing a new keyword in Java class definitions, via. It essentially provides a methodology to simply and maintainably perform automatic delegation of interface methods to member fields. Part one introduced the concept, while part two will delve into more of the finer points.

Inheritance model integrity


The integrity of the inheritance model is maintained, as a class using via can also subclass another class, eg:
public class Laptop extends Computer implements Chargeable via this.internalBattery

And this class can even be subclassed itself:
public class MacBook extends Laptop

As seen in Part one, the classes compile to a traditional POJO so inheritance is not adversely affected. Of course any class can only extend one class, but via can wrap many classes.

An illustration: If obj.method() is invoked and Obj uses via
  • an implementation of method() defined by Obj is looked for first.

  • If not found, then Obj.super.method() is tried, and so on up the hierarchy to Object.method().

  • If not found in the object's hierarchy up to and including Object then its implemented interfaces are searched (and the method invoked on the corresponding field).

  • If Obj's interfaces don't define it then work up the inheritance tree again, checking for any interfaces fulfilled with via.

  • In the situation where two or more interfaces are mapped with via in the same class and define the same method, obviously the JVM wouldn't know which field to invoke the method on. In this case a compiler error will be thrown. The solution is to explicitly override the method in Obj.


IOC and DI


What about the IOC pattern? As far as I see it, via is an awesome tool to use with IOC patterns, such as the Spring Framework. The idea with IOC (or Dependency Injection (DI)) is that any implementation class that fulfils the required interface can be swapped in at runtime. To my knowledge, though, DI can't be used to inject a class to be subclassed, as inheritance is defined as the subclassing of a given concrete class. As we saw in Part one, via basically lets us subclass interfaces. This means that a given interface can gain our extra functionality or handling (using via), while the backing implementation can be any class - whatever the IOC container passes in. Very cool.


Multi-class interface fulfilment


Part one mentioned the idea of fulfilling an interface by the combining of two or more classes. The problem this is trying to solve is basically that to implement one interface method might require the help of a number of private methods. This then encourages splitting up interfaces so that the implementation classes don't suffer from unreadability and complexity. The problem this creates is that the exposed interfaces then increase in number and each is more basic than it need be.
The idea is that an interface can be defined with as many methods as the pure design requires, without concern about the resulting complexity of any implementors. Using via, a single class can implement an interface and define a number of implementation classes that combine to fulfil it. Here's a full example:
// first two package-visible interfaces
interface PackageIface1 {
    void setName(String name);
}

interface PackageIface2 {
    void setNumber(int number);
}

// now the public interface that combines them
public interface PublicIface extends PackageIface1, PackageIface2 {}

// the implementation classes (package-visible)
class PackageClass1 implements PackageIface1 {
    public void setName(String name) {...}
}

class PackageClass2 implements PackageIface2 {
    public void setNumber(int number) {...}
}

// and the integration class that brings them all together
public class PublicClass implements PublicIface via this.pkgIface1Obj & this.pkgIface2Obj {
    // class just defines the two fields and constructor(s)
}

(Note: The '&' symbol is arbitrarily chosen and could be anything that makes sense and is achievable).
So the PublicIface is the only interface that needs to be exposed publicly. The PublicClass will be a fixed implementation to use. Discrete functionality subsets of PublicIface can be changed by swapping in a different implementation of PackageIface1 or PackageIface2 that get passed to PublicClass's constructor. And there is no limit to the number of classes that can be used to fulfil the main interface, provided that each one exclusively implements an interface that is extended by the main interface.


I may be way off track, but I dreamed this up when coming across the same problem for the tenth time in an enterprise Java project using Spring, and it just seems to fit. It may be that there are techniques or patterns out there that mean I can do all this already, or just that I should be slapped for suggesting such things. All feedback welcome.

interface fulfillment using fields - a java language proposition - part 1 of 2

I'm proposing a new keyword in Java class definitions, via. It essentially provides a methodology to simply and maintainably perform automatic delegation of interface methods to member fields.

This provides:

  1. Encouragement for coding to interfaces over inheritance
  2. Object wrappers that do not need to subclass the wrapped object, yet still expose the wrapped object's methods directly (exposed as 'is-a' rather than having to manually delegate because of 'has-a')
  3. The ability to 'swap out' the effective superclass (like subclassing an interface, not a class)
  4. The advantages of multiple inheritance
  5. And potentially: interface fulfillment by combining two or more classes

Here's an example of what it might look like:
public class Car implements Driveable via this.vehicle {

Which says, I have a class, Car, which implements the interface Driveable. Car as an object definition may not fulfill all (or any) of the requirements for Driveable, but via its field 'vehicle', the contract is met.

And the code for Car might look like:
{
    private Driveable vehicle;

    public Car() {
        this.vehicle = new DefaultCar();
    }

    public void doCarStuff() {...}
}

So we can see here that the requirements for the Driveable interface can be met by anything that can be assigned to the 'vehicle' field. In this case a new instance of DefaultCar. As with any class, Car can also define any other additional members as well (e.g. doCarStuff()), enriching the functionality of DefaultCar (as subclassing would). It is essentially a methodology for automatic and type-safe delegation. But we gain nothing in this example, we may as well just extend the DefaultCar class. To get the advantages we need to make some changes...
{
    private Driveable vehicle;

    public Car() {
        this.vehicle = new DefaultCar();
    }

    public Car(Driveable driveableClass) {
        this.vehicle = driveableClass;
    }

    public void doCarStuff() {...}
}

In this way Car's new functionality can enrich any implementation of Driveable. At runtime we can instantiate different instances of Car, each of which could have a unique implementation of Driveable. Swapping in different driveableClass objects could be useful for endowing different classes with the same extra features. Also for testing, Mocks or test doubles can be passed to the constructor so that only the added features are under test.

In this example Car could also override any of the Driveable methods, as with traditional class inheritance. More on overriding in Part two.


Under the hood


So how is it working? I imagine the compiler would generate bytecode representing the code below. You could write this yourself, but it would be messy and require duplication.
Firstly an example of the Driveable interface:
public interface Driveable {

    void setSpeed(int speed);

    boolean isMoving();
}


Then the effective resulting code (ie the developer wouldn't see it written like this):
public class Car implements Driveable {
    private Driveable vehicle;

    public Car() {
        this.vehicle = new DefaultCar();
    }

    public Car(Driveable driveableClass) {
        this.vehicle = driveableClass;
    }

    public void doCarStuff() {...}

    // delegation
    public void setSpeed(int speed) {
        this.vehicle.setSpeed(speed);
    }

    // delegation
    public boolean isMoving() {
        return this.vehicle.isMoving();
    }
}



Multiple inheritance


While some debate the merits of multiple inheritance, it is often imitated by subclassing one class and delegating to an inner object that subclasses another. The via keyword, however, allows us to do this directly.
public class Car implements Driveable via this.vehicle,
Runnable via this.runner {


The construct would not break existing Java inheritance relationships, as a class can both extend a class as well as use interface fulfillment via fields, and a class that uses that construct can itself be subclassed.


More to follow...


More benefits and some finer points will follow in Part 2, including benefit [5] hinted at earlier...

debug sound in linux - first steps!!

Wednesday, July 09, 2008

My sound stopped after having worked for 6 months. I hadn't run any updates, and even tried booting into previous kernels to check, but that didn't help. Finally I discovered the answer, but not thanks to internet searching. Here are the requirements:

  1. If you are running linux
  2. If you have Windows on another partition you can dual boot into
  3. If sound works ok in Windows
and here are the first two steps:
  1. Check for a hardware mute switch and/or volume control on the machine (especially laptops) - this will affect Windows as well, so if it's currently working there then this won't be the problem
  2. Boot into Windows. Unmute the sound. Try linux again
So that was my problem. I had muted sound whilst in Windows, not realising that Windows can hardware-mute the sound card, something that linux doesn't have control over. So when re-booting into linux, it thought it was playing sound fine, but I could hear nada. Hope this helps someone :)

The silver-lining in this exercise in frustration was fixing my VLC performance. I had noticed for a few weeks that music and movies played in VLC would be silent for the first 10s or so. Running VLC in terminal gave complaints about pulseaudio not found.
The solution for me was to go into VLC Preferences > Output Modules, check the box for advanced options, and change 'Audio Output Module' from default to 'ALSA audio output'.
The alternative would be to install pulseaudio (sudo aptitude install pulseaudio). It seems pulseaudio is the new default sound server in Ubuntu and provides more advanced sound functionality, particularly when it comes to combining sounds, so this may be the best choice.

greasemonkey for ff3rc1

Wednesday, June 11, 2008

Use greasemonkey and the latest firefox 3 releases as they come out? Well greasemonkey was only compatible up to ff3b5, but you can get a version (supposedly production-ready) from this link. Phew!

EDIT: Not required now as firefox3 is officially released.

opensource hardware in nz

Tuesday, May 27, 2008

Came across this site that stocks some really fun looking hardware. Of particular note is stocking only opensource-capable routers, laptops without an OS (no M$ tax!) and single-board computers that use only 5-7 watts that you could really have some fun with. Oh and who could pass up a GSM modem with built-in GPS? Oh, the gadgetry and geekyness.

nicegear.co.nz

jni - call C/C++/Assembly from Java

Friday, May 23, 2008

  1. Create Java class containing native method(s) (static or instance) defining interface with C code.
    public class Processor {
        public static native double process(double x, double y);
    }

  2. Compile to Java class file.

  3. Run
    javah -jni [-o path/to/CProcessor.h] Processor
    to generate C header file from the Java. Eg. CProcessor.h:
    /* DO NOT EDIT THIS FILE - it is machine generated */
    #include <jni.h>
    /* Header for class Processor */

    #ifndef _Included_Processor
    #define _Included_Processor
    #ifdef __cplusplus
    extern "C" {
    #endif
    /*
    * Class: Processor
    * Method: process
    * Signature: (DD)D
    */
    JNIEXPORT jdouble JNICALL Java_Processor_process
    (JNIEnv *, jclass, jdouble, jdouble);

    #ifdef __cplusplus
    }
    #endif
    #endif

  4. Create a .c C source file - this is the stub that calls the target code. Include <jni.h> and "CProcessor.h". The angle brackets mean the header is looked up as a registered library file by the compiler, whereas the quotes denote a stand-alone header file. You may need to include the path to CProcessor.h within the quotes. Here is a simple example that just sums the two input numbers, rather than calling any other C:
    #include <jni.h>
    #include <math.h>
    #include "../headers/CProcessor.h"
    #include <stdio.h>
    JNIEXPORT jdouble JNICALL Java_Processor_process
    (JNIEnv *env, jclass obj, jdouble val1, jdouble val2) {
        return (val1 + val2);
    }

  5. Run (eg for linux)
    gcc -shared -fPIC -I /usr/lib/jvm/java-1.5.0-sun-1.5.0.15/include/ -I /usr/lib/jvm/java-1.5.0-sun-1.5.0.15/include/linux/ -o CProcessor.so CProcessor.c
    to create the library file (indicated by the 'shared' flag; -fPIC builds the position-independent code that shared libraries need on some platforms). Any compiler errors about missing .h headers should be solved by the inclusion (-I) of the jni and jni-for-linux paths.

  6. Either in a method inside original Java class, or in a new calling class, create a static block to either load (System.load()) the .so library via its file path, or load (System.loadLibrary()) a registered library (e.g. dll on Windows) via system-specific addressing.

  7. Once the static block has loaded the library, the methods are available either statically or from an object as determined in 1.
    public class Caller {
        static {
            System.load("/home/me/_projects/JNI/C/CProcessor.so");
        }

        public static void main(String[] args) {
            double result = Processor.process(2, 3);
            System.out.println(result);
        }
    }
    In this case our Processor in Java calls the CProcessor in C, which adds the 2 doubles we passed it and returns a double. Here it is 2 + 3 with the output 5.0.

printing from Windows virtual machine on linux host

Thursday, April 03, 2008

It's great to be able to use a printer that you have defined in your linux host from within virtualised Windows guest OS (where the printer is either connected directly to your box or over a network). This post assumes you've already setup networking between your guest and host.

Firstly in your printer settings on the host, make sure access hasn't been restricted (open access by default on Ubuntu).
Find the name of the printer that you want to access (ie not the Description which is what appears in network browsing)

Then on the virtual windows machine (for XP):
Choose Add Printer
Select network printer
Select Printer on Internet
For the url, use:

http://{host ip}:631/printers/{CUPS printer name}
Windows won't be able to install missing drivers automatically (ie it can't load the linux drivers), so find the drivers from somewhere and select the Have Disk option to point Windows at the .inf file.
Should be away laughing!

This howto has more direct CUPS fiddling than is needed for Ubuntu now, but I got key information from there to find out the above.

run an open-source virtual machine with qemu

I use the awesome qemu open-source processor emulator, which lets you run virtual OSs (guests) on a host computer (my Xubuntu box, in my case). Qemu can make guest OSs think they're on a box with an entirely different processor type (eg a PowerPC guest on an x86 box). But what I've found awesome is that with the relatively new Kernel-based Virtual Machine (aka KVM) on linux, with a supported processor, virtualisation goes right down to the cpu level - so it's fast!

If you're on linux and want to check if your processor does support it, use:

egrep '^flags.*(vmx|svm)' /proc/cpuinfo
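Assuming the flag shows up, here's roughly what launching a KVM-accelerated guest might look like. The image name, sizes and installer ISO are placeholders, and package/binary names vary by release:

```
# create a disk image for the guest, then boot an installer CD with KVM acceleration
qemu-img create -f qcow2 guest.img 10G
qemu-system-x86_64 -enable-kvm -m 512 -hda guest.img -cdrom installer.iso -boot d
```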

set ear file's context root in WebSphere

Thursday, March 27, 2008

WebSphere lets you set the context root of a war file directly on page 1 of the update screens. ear files are different, however:
Turns out when updating an ear, you need to select the option on the first screen to show all build/config options.
Then skip to step 8, 'Map context roots for Web modules', where the context root can be specified.

managing networks without gnome

Wednesday, March 26, 2008

Having Xubuntu on my machine, gnome's Network Manager is an option, but I try to avoid the bloat of gnome. A great alternative (which many say is better) is wicd. It successfully scans for wireless networks, handles WEP connections, and is the only way I could find to connect to WPA/WPA2 networks. Great product. (howto)

keep grub boot switches between kernel updates

Thursday, March 06, 2008

I have been frustrated on numerous linux systems where I have to use custom boot switches in Grub. I add them to the relevant entry in /boot/grub/menu.lst, but then a new kernel version (which adds a new boot entry) doesn't carry the option over. In the past this was simply annoying because I had to go back in and re-add the text to the entry. But now I've built a machine for a paying customer and I can't go round every time there's a kernel update!

It turns out the solution is simple:
Between the flags
### BEGIN AUTOMAGIC KERNELS LIST
and
## ## End Default Options ##
are settings that the updater will read in order to create the new boot entries.

The one we want is (as far as I recall, on Ubuntu) the line beginning
# defoptions=
which, despite looking like a comment, is parsed by the updater. Add your switches to this line and they will get added to any new boot list entries.


While I'm on the topic, another useful setting in this set is the
# howmany=
line, which lets us reduce the number of kernels that remain as options in the boot list.

lock hibernate session to avoid lazy loading exceptions in tests

Tuesday, March 04, 2008

// Lock this hibernate session to the current thread so we don't get lazy loading exceptions.
// Imports (for the Spring 2.x / Hibernate 3 setup I'm using): org.hibernate.Session,
// org.hibernate.SessionFactory, org.springframework.orm.hibernate3.SessionFactoryUtils,
// org.springframework.orm.hibernate3.SessionHolder,
// org.springframework.transaction.support.TransactionSynchronizationManager
SessionFactory sessionFactory = (SessionFactory) appContext.getBean("sessionFactory");
Session session = SessionFactoryUtils.getSession(sessionFactory, true);
TransactionSynchronizationManager.bindResource(sessionFactory, new SessionHolder(session));

ssh tunnels

Thursday, January 17, 2008

Using ssh we can create tunnels to route bi-directional traffic. Depending on our choice of destination and server computers, the complete path is made of a fully secure ssh tunnel, possibly followed by an additional hop that is not secure. The examples below should help explain this.
Here's the command:

ssh -NL localport:targethost:hostport server [-p serverport]

Where

  • -N means don't start a remote ssh session (useful if just port forwarding)
  • -L indicates a tunnel is to be set up with the following params:
    • localport: tunnel 'entry'. Port number to connect to locally (ie at localhost)
    • targethost: tunnel 'destination'. Which computer (name or ip) to connect to
    • hostport: tunnel 'exit'. Port number to connect to on the far side of the tunnel.
  • server: computer to validate ssh connection on (needs to be running ssh server). This forms part of the tunnel and three variations of this are explained below. You can make this 'user@server' in order to specify a username for the authentication if required.
  • serverport: this is optional, but if you want to create the ssh connection over a non-standard port (ie not 22) then use this. You'll need to get the sshd listening on the new port in this case.
Remember these funky things are 2-way so if whatever you're communicating with at the other end of the tunnel sends info back down the tunnel (via hostport) then you'll get it right back at your end (localport). It's cool.

Example 1, target computer is running ssh server:
ssh -NL 8081:192.168.1.4:8090 192.168.1.4
Create a tunnel from 8081 locally to 192.168.1.4, and appear there at port 8090. Validate at 192.168.1.4. This is a short tunnel, between just the client and host, but completely secure. All traffic in this case will be over port 22 (ssh default).

Example 2, host computer is running ssh server:
ssh -NL 8081:192.168.1.4:8090 localhost
Exactly as in example 1, only this time the local machine is both ssh server and client. There is effectively no secure tunnel here (ie the ssh leg is only between ports on the local machine), so traffic travels unsecured over port 8090 to 192.168.1.4.

Example 3, 3rd computer is running ssh server:
ssh -NL 8081:192.168.1.4:8090 192.168.1.5 -p 3389
In this case, locally we are the ssh client, connecting securely (over port 3389) to the ssh server, 192.168.1.5. From there traffic travels over port 8090 to the destination box, 192.168.1.4. This is very powerful as traffic can be securely routed into a limited-access network. For example in this case if Microsoft's Remote Desktop Protocol (RDP) was the only traffic allowed in and out (eg through VPN), then the client externally can run this command, connect through on the RDP port (3389), connecting to a box inside the network (192.168.1.5). From there the destination IP is resolved, so this needn't be visible from the client computer, and traffic flows freely inside the network over 8090.

Another option is remote tunnels (instead of local tunnels). These use essentially the same commands, but with -R in place of -L. With a remote tunnel it is possible to declare a port on a foreign machine as the tunnel entry.
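As a sketch (reusing the made-up hosts from example 3), a remote-tunnel version might be:

```
ssh -NR 8081:192.168.1.4:8090 192.168.1.5
```

This asks the ssh server (192.168.1.5) to listen on its port 8081; connections arriving there are carried back through the ssh link to the local machine, which then forwards them on to 192.168.1.4 port 8090.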

A nice way to test your tunnel is to use netcat. For the above examples this test would be:

On 192.168.1.4:
nc -l -p 8090
listen (-l) on port (-p) 8090 - end of the tunnel

On localhost:
nc localhost 8081
connect to localhost on 8081 - start of the tunnel

The localhost connection should get tunneled to the remote box and you'll be able to type at either end and see it appear in the terminals.