
Latest megacli 8.07.14 emits message “Configure Adapter Failed” on Perc5i (LSI MegaSAS 8408E)

This morning I came to reconfigure a RAID array on an old Dell Perc5i with the LSI MegaRAID CLI tool megacli, and whilst displaying information and removing an old logical drive (LD) appeared to work fine, I was greeted with the following when trying to add a new one:

# megacli -CfgSpanAdd -r10 -Array0[10:2,10:3] Array1[10:6,10:7] WB RA Direct CachedBadBBU -a0

Adapter 0: Configure Adapter Failed

Exit Code: 0x03

Balls. The “Exit Code: 0x03” is supposed to mean “Input parameters are invalid” (ref), but after a few moments of head scratching and double-checking my parameters I realised that couldn’t be the problem. The megacli package I’m using comes from the Debian repository at hwraid.le-vert.net and has always worked in the past. A quick check of their homepage reveals a news item stating:

2014/01/26 — I just updated megacli to release 8.07.14. Despite it seems to works for me, I’d really appreciate some feedbacks, especially if you’re running a 32 bits system. Please drop me a mail !

So something in the new version isn’t 100% compatible with the Perc5i. I’ll send them an email, but I needed to get the adapter configured and wasn’t too keen to trudge off to the server room. After a quick Google search I managed to find a rather old v4.00.16 RPM package in an archive here and pulled out the 64-bit binary with rpm2cpio:

$ wget http://docs.avagotech.com/docs-and-downloads/legacy-raid-controllers/legacy-raid-controllers-common-files/4-00-16_Linux_MegaCli.zip
$ unzip 4.00.16_Linux_MegaCli.zip
$ unzip MegaCliLin.zip
$ rpm2cpio MegaCli-4.00.16-1.i386.rpm | cpio -idmv
$ cd opt/MegaRAID/MegaCli/
# ./MegaCli64 -CfgSpanAdd -r10 -Array0[10:2,10:3] Array1[10:6,10:7] WB RA Direct CachedBadBBU -a0

Adapter 0: Created VD 1

Adapter 0: Configured the Adapter!!

Exit Code: 0x00

Success!

Edit: 2014/10/16 12:30 The package here is much more up to date and also works for me with the Perc5i; however it’s not a trustworthy link and I had to do a little more fiddling to get it to run on Wheezy:

# apt-get install libsysfs2
# ln -s /lib/x86_64-linux-gnu/libsysfs.so.2.0.1 /lib/x86_64-linux-gnu/libsysfs.so.2.0.2

Spatial capabilities in PostgreSQL with PostGIS

When dealing with geographic data in PostgreSQL, at some point we’re going to want to lookup rows based purely on their location relative to one another.

With smaller volumes of data, we can get away quite easily with just latitude / longitude and some maths, but once we get past a certain point we’re going to want to be able to do index lookups based on distance from a specific location; this is where the PostGIS extension is priceless.

Installing PostGIS on Debian Wheezy

Installing under Debian when PostgreSQL has been installed via the package manager is dead simple. I’m using the 2.1 version packaged for a 9.3 server from the apt.postgresql.org repository, but you should select the package that matches your server version. It’s also possible to install version 1.5 directly from Wheezy main if you don’t want to add other repositories to apt.

$ sudo apt-get install postgresql-9.3-postgis-2.1

This will install the PostGIS contrib modules into the /usr/share/postgresql/9.3/contrib directory rather than the “extension” directory used by most other PostgreSQL packages in Debian.

Some of the machines I look after have PostgreSQL installed from source for various reasons (but mainly because, historically, packages from Debian weren’t very timely), and these machines require a slightly more lengthy setup process. To compile PostGIS from source we need to install some dependencies, one of which is the development files for the Geospatial Data Abstraction Library (GDAL); these are in the “libgdal-dev” package, which in turn depends on the “libpq-dev” package, and that will most likely interfere with our source install. There are two options here: either install the package without its dependencies (make a note you’ve done it, to avoid future dependency problems), or roll GDAL from source.

First we install the required dependencies; these are listed in the PostGIS docs, and for my test machine, which already has everything required to compile PostgreSQL, these are:

$ sudo apt-get install libgeos-dev libproj-dev libjson-c-dev libjson0-dev

Then install the gdal dev libraries in whichever manner suits:

$ sudo apt-get download libgdal-dev
$ sudo dpkg --force-all -i libgdal-dev_1.9.0-3.1_amd64.deb

OR

$ cd /tmp
$ wget http://download.osgeo.org/gdal/1.10.1/gdal-1.10.1.tar.gz
$ tar xvfz gdal-1.10.1.tar.gz && cd gdal-1.10.1
$ ./configure
$ make
$ sudo make install

Once this is done we can compile and install PostGIS:

$ cd /tmp
$ wget http://download.osgeo.org/postgis/source/postgis-2.1.1.tar.gz
$ tar xvfz postgis-2.1.1.tar.gz && cd postgis-2.1.1
$ ./configure
$ make
$ sudo make install

Getting started with PostGIS

Now we’ve got PostGIS installed, we just need to create the extensions in the database:

$ psql -U glyn -d test -c 'CREATE EXTENSION postgis; CREATE EXTENSION postgis_topology;'

OR on 9.0 and lower run the SQL scripts in the contrib/postgis-2.1 directory:

$ psql -U glyn -d test -f postgis.sql
$ psql -U glyn -d test -f postgis_comments.sql
$ psql -U glyn -d test -f spatial_ref_sys.sql
$ psql -U glyn -d test -f rtpostgis.sql
$ psql -U glyn -d test -f raster_comments.sql
$ psql -U glyn -d test -f topology.sql
$ psql -U glyn -d test -f topology_comments.sql

So let’s generate some data for testing; we fill a table called “friends” (which has name, town, latitude and longitude columns) with 90k rows, storing each friend’s location as latitude and longitude values. Admittedly the distribution in this table won’t be that realistic, but it should suffice for testing:

INSERT INTO friends
SELECT b.unnest || ' v.' || generate_series,
    CASE WHEN random() > 0.5 THEN 'Somewhere Else' ELSE 'Somewhere' END,
    1.0838637+random()*(CASE WHEN random() > 0.5 THEN -1 ELSE 1 END),
    52.7389201+random()*(CASE WHEN random() > 0.5 THEN -1 ELSE 1 END)
FROM generate_series(1,10000)
CROSS JOIN (SELECT unnest(ARRAY['White Wonder','Colonel K','El Loco','Count Duckula','Leatherhead','Barron Greenback','Ernest Penfold','Professor Heinrich Von Squawkencluck','Flying Officer Buggles Pigeon'])) b;

In the scenario where we don’t have PostGIS, we can create an SQL function to calculate the earth distance between two points, but the distance from an arbitrary point has to be computed per row at query time, so it can’t be indexed:

CREATE OR REPLACE FUNCTION earth_distance_miles(lat1 float, lat2 float, long1 float, long2 float) 
RETURNS double precision
AS 
    'SELECT 3963.0*acos(sin($1/57.2958)*sin($2/57.2958)+cos($1/57.2958)*cos($2/57.2958)*cos(($4/57.2958)-($3/57.2958)));'
LANGUAGE SQL IMMUTABLE;
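As a sanity check, the same spherical law of cosines calculation can be sketched outside the database. This Python version mirrors the SQL function above (3963.0 is the Earth’s radius in miles; math.radians stands in for dividing by 57.2958 degrees per radian), and the city coordinates are just illustrative values of my own choosing:

```python
from math import acos, cos, sin, radians

def earth_distance_miles(lat1, lat2, long1, long2):
    # Spherical law of cosines; 3963.0 is the Earth's radius in miles.
    return 3963.0 * acos(
        sin(radians(lat1)) * sin(radians(lat2))
        + cos(radians(lat1)) * cos(radians(lat2))
        * cos(radians(long2) - radians(long1))
    )

# Central London to central Paris: a little over 200 miles great-circle.
print(earth_distance_miles(51.5074, 48.8566, -0.1278, 2.3522))
```

Note that for two identical points, floating point rounding can push the acos argument fractionally above 1, so a production version would clamp it.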

This makes listing out “friends” within a mile pretty easy:

EXPLAIN ANALYZE SELECT f.*, 
    earth_distance_miles(52.7389201, f.latitude,  1.0838637, f.longitude) AS dist_miles
FROM friends f WHERE earth_distance_miles(52.7389201, f.latitude,  1.0838637, f.longitude) <= 1
ORDER BY earth_distance_miles(52.7389201, f.latitude,  1.0838637, f.longitude);
                                           QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=10988.40..11063.40 rows=30000 width=108) (actual time=160.006..160.013 rows=29 loops=1)
   Sort Key: ((3963::double precision * acos(((0.795884736186082::double precision * sin((latitude / 57.2958::double precision))) 
   	+ ((0.605448170123596::double precision * cos((latitude / 57.2958::double precision))) * cos(((longitude / 57.2958::double precision) 
   	- 0.0189169834438126::double precision)))))))
   Sort Method:  quicksort  Memory: 30kB
   ->  Seq Scan on friends f  (cost=0.00..7510.00 rows=30000 width=108) (actual time=19.993..159.930 rows=29 loops=1)
         Filter: ((3963::double precision * acos(((0.795884736186082::double precision * sin((latitude / 57.2958::double precision))) 
         	+ ((0.605448170123596::double precision * cos((latitude / 57.2958::double precision))) 
         	* cos(((longitude / 57.2958::double precision) - 0.0189169834438126::double precision)))))) <= 1::double precision)
 Total runtime: 160.069 ms

Now let’s put PostGIS to work and add an indexed geography column:

ALTER TABLE friends ADD COLUMN geog geography(Point,4326); -- SRID 4326 for WGS84
UPDATE friends SET geog = ST_MakePoint(longitude, latitude);
CREATE INDEX ON friends USING GIST (geog);

Now let’s try to list out our “friends” within a mile again, this time making use of the PostGIS ST_Distance and ST_DWithin functions:

EXPLAIN ANALYZE SELECT f.*, 
    ST_Distance(f.geog, ST_MakePoint(1.0838637, 52.7389201))/1609 AS dist_miles
FROM friends f WHERE ST_DWithin(f.geog, ST_MakePoint(1.0838637, 52.7389201), 1609)
ORDER BY ST_Distance(f.geog, ST_MakePoint(1.0838637, 52.7389201));
                                           QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=37.90..37.90 rows=1 width=108) (actual time=1.756..1.760 rows=29 loops=1)
   Sort Key: (_st_distance(geog, '0101000020E610000017258D768157F13FB4ED0FEF945E4A40'::geography, 0::double precision, true))
   Sort Method:  quicksort  Memory: 32kB
   ->  Bitmap Heap Scan on friends f  (cost=2.39..37.89 rows=1 width=108) (actual time=0.908..1.684 rows=29 loops=1)
         Recheck Cond: (geog && '0101000020E610000017258D768157F13FB4ED0FEF945E4A40'::geography)
         Filter: (('0101000020E610000017258D768157F13FB4ED0FEF945E4A40'::geography && _st_expand(geog, 1609::double precision)) 
         	AND _st_dwithin(geog, '0101000020E610000017258D768157F13FB4ED0FEF945E4A40'::geography, 1609::double precision, true))
         ->  Bitmap Index Scan on friends_geog_idx  (cost=0.00..2.39 rows=16 width=0) (actual time=0.351..0.351 rows=45 loops=1)
               Index Cond: (geog && '0101000020E610000017258D768157F13FB4ED0FEF945E4A40'::geography)
 Total runtime: 1.821 ms

This shows a marked improvement, from a query time of 160.069 ms down to 1.821 ms. Obviously our mileage will vary depending on the quantity of data in the table, its distribution, and just how many rows we want to retrieve.

PostgreSQL Latin1 database with ASP.Net Unicode and DOS codepage 850 applications

So the title sums it up without going into detail; we basically have a legacy DOS application that had its underlying data files migrated into a latin1-encoded PostgreSQL database with no form of intelligent mapping (i.e. the cp850 “ú” character, which is codepoint 163, is seen as the character “£” in latin1).
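The clash is easy to demonstrate with Python’s built-in codecs: the same byte value decodes to a different character under each encoding:

```python
raw = bytes([163])            # codepoint 163, byte 0xA3
print(raw.decode('cp850'))    # 'ú' (what the DOS application means)
print(raw.decode('latin-1'))  # '£' (what the latin1 database sees)
```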

Converting to UTF8 at this point would have saved a lot of hassle; however, at the time of conversion the existing data wouldn’t easily go into UTF8 without further intermediate conversion. Even then, UTF8 wouldn’t be a magic bullet; we’d still have to sanitize the incoming data so we didn’t store anything the DOS side (or any other equipment) couldn’t read.

Fast forward a few years and there are now two main apps hitting the database: an ASP.Net application and the aforementioned legacy DOS system. Both are heavily used and the DOS system isn’t going anywhere, so we need a way for both systems to get along; crucially, the DOS application must be able to understand everything the ASP.Net application puts in the database, and the ASP.Net application must be able to display the DOS codepage correctly in UTF8.

So how do we do this? All my attempts using the PostgreSQL encode/decode functions failed due to there being no cp850 representation, and although I’m sure it’s possible to do the conversion in SQL using something like translate/convert and a large mapping string, I don’t have time to fiddle with it right now. However, Perl’s Encode module can convert from cp850, so one solution from within PostgreSQL is to write a set of untrusted pl/perl functions like:

CREATE OR REPLACE FUNCTION cp850_to_utf8(text)
RETURNS text AS
$BODY$
	# Treat the latin1-stored bytes as cp850, then hand the result back
	# as latin1 text for PostgreSQL to convert to the client encoding.
	use Encode;
	return encode( 'iso-8859-1', decode('cp850', $_[0] ));
$BODY$
LANGUAGE 'plperlu' IMMUTABLE;

CREATE OR REPLACE FUNCTION utf8_to_cp850(text)
RETURNS text AS
$BODY$
	# The reverse: produce text whose latin1 bytes read correctly as
	# cp850 on the DOS side.
	use Encode;
	return encode( 'cp850', decode('iso-8859-1', $_[0] ));
$BODY$
LANGUAGE 'plperlu' IMMUTABLE;

For my current problem this means that if either application writes something like “Straße”, the other can read back the same representation.
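The round trip is easy to sanity-check outside the database too. This Python sketch mirrors the intent of the two pl/perl functions above (working on Python strings directly, rather than on the server’s latin1 text):

```python
def utf8_to_cp850(text: str) -> str:
    # Encode to cp850 bytes, then reinterpret those bytes as latin1,
    # i.e. what actually ends up stored in the latin1 column.
    return text.encode('cp850').decode('latin-1')

def cp850_to_utf8(stored: str) -> str:
    # The reverse: recover the raw byte values via latin1, then decode
    # them as the cp850 the DOS application intended.
    return stored.encode('latin-1').decode('cp850')

stored = utf8_to_cp850('Straße')
print(stored)                 # the mangled-looking view in the latin1 column
print(cp850_to_utf8(stored))  # 'Straße' again, round trip intact
```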

From the ASP.Net side we can do the conversion outside the database with a similar process. Converting text to go into the database involves switching the encoding first from Unicode into cp850, then reading the resulting bytes as latin1 to get the correct representation on the DOS side:

private static Encoding _eCp850 = Encoding.GetEncoding(850);
private static Encoding _eUnicode = Encoding.UTF8;
private static Encoding _eLatin1 = Encoding.GetEncoding("ISO-8859-1");

public static string encode_cp850(string sText)
{
	string sReturn;
	byte[] bSource;
	byte[] bTarget;

	// Unicode text -> cp850 bytes, then read those bytes as latin1 to
	// get the string the latin1 database expects to store.
	bSource = _eUnicode.GetBytes(sText);
	bTarget = Encoding.Convert(_eUnicode, _eCp850, bSource);
	sReturn = _eLatin1.GetString(bTarget);

	return sReturn;
}

Reading is a little trickier though, as all strings in .Net are Unicode (stored as UTF16 internally), so once our value has been read back into a string we can’t easily do our conversion, as the initial byte representation has been mangled. However, if we read our value back from the database as bytea into a byte array, we can successfully convert it into the correct UTF8 representation:

public static string decode_cp850(byte[] sTextAsBytea)
{
	string sReturn;
	byte[] bSource = sTextAsBytea;
	byte[] bTarget;

	// The bytea holds the raw latin1 column bytes, which are really
	// cp850; convert them to UTF8 and build the string from those.
	bTarget = Encoding.Convert(_eCp850, _eUnicode, bSource);
	sReturn = _eUnicode.GetString(bTarget);

	return sReturn;
}
        

I’m still wondering if there is a better way (I know, I know, yes it’s to have everything in UTF8), but for now this seems to manage ok for us.

I’ve uploaded some scripts to my git repository at postgresql/encoding_woes (also mirrored on github at glynastill/encoding_woes).