Last year I wrote an application to highlight media outlets and their reach (the coverage of each outlet), selecting regions within the UK and highlighting areas of a map. This ran into many issues, from performance problems rendering within browsers to the limitations of converting KML to tiles via Google. Some of these limitations were:

  1. Timeouts from Google on large KML files.
  2. Responsiveness of our servers delivering KML files to Google.
  3. Maximum KML size (even when gzipped).
  4. 500 errors from Google.
  5. Transparency within IE.
  6. ….

Some of these limits have since been increased by Google and are documented:

Maximum fetched file size (raw KML, raw GeoRSS, or compressed KMZ): 3MB
Maximum uncompressed KML file size: 10MB
Maximum number of network links: 10
Maximum number of total document-wide features: 1,000

In order to alleviate these issues I ended up with the following:

  • Caching KML files to avoid latency from expensive database lookups.
  • Chunking the response into 250-record pages and writing each to an individual static KML file (large files would cause Google to time out retrieving the data set).
  • Proxying Google's tiles after they had been converted from KML to images, caching them locally on our servers, and applying the merged overlays from our servers.

So depending on the depth (zoom) of the map and the area selected, as well as the volume of data, it would either use tiles or Google's KML directly (for increased functionality).

In order to have greater control over the spatial data within our database, we split it into areas, regions and sub_regions, which held lookups to postcodes, towns and the spatial data itself (there are a lot of discrepancies over map outlines).

Left hand menu:

<ul style="display: block;">
	<li id="East"><a href="#" onclick="loadTilesFromGeoXML('|1|'); return false;">East</a>
		<ul style="display: none;">
			<li><a href="#" onclick="loadTilesFromGeoXML('|1|6'); return false;">Bedfordshire</a></li>
			<li><a href="#" onclick="loadTilesFromGeoXML('|1|18'); return false;">Cambridgeshire</a></li>
			...
		</ul>
	</li>
</ul>

JavaScript to locate the tiles:

  function loadTilesFromGeoXML(entity_id) {
    // Matches database record ids that are mapped to spatial data within MySQL
    var parts = entity_id.toString().split('|');
    var mapTownsId = parts[0];
    var mapRegionsId = parts[1];
    var mapSubRegionsId = parts[2];
    var locationUrl = 'map_towns_id='+mapTownsId+'&map_regions_id='+mapRegionsId+'&map_sub_regions_id='+mapSubRegionsId;

    var cc = map.fromLatLngToDivPixel(map.getCenter());
    map.setZoom(1);

    // Request URL for links to cached tiles
    var geoXMLUrl = '/ajax/mapping/get/overlays/region?'+locationUrl;
    geoXMLUrl += '&format=JSON&method=getLinks&x='+cc.x+'&y='+cc.y+'&zoom='+map.getZoom();

    // tileUrlTemplate: 'http://domain.com/maps/proxy/regions/?url=http%3A%2F%2Fdomain.com/ajax/mapping/get/cache/?filename=.1.6.0&x={X}&y={Y}&zoom={Z}',

    $.getJSON(geoXMLUrl, function(data) {
      var kmlLinks = '';
      $.each(data, function(i, link) {
        kmlLinks += encodeURIComponent(link)+',';
      });

      // Builds the location for tiles to be mapped
      var tileUrlTemplate = '/maps/proxy/regions/?url='+kmlLinks+'&x={X}&y={Y}&zoom={Z}';
      var tileLayerOverlay = new GTileLayerOverlay(
        new GTileLayer(null, null, null, {
          tileUrlTemplate: tileUrlTemplate,
          isPng:true,
          opacity:1.0
        })
      );
      if (debug) GLog.writeUrl(tileUrlTemplate);
      map.addOverlay(tileLayerOverlay);
    });
  }

Response whilst retrieving links (if cached)

The code behind this simply caches the KML files: if a file does not exist it attempts to create it, then outputs a JSON response listing the files matching the sequence, globbing for any files with a similar pattern. All files are suffixed with their page number.

["/ajax/mapping/get/cache/?filename=.1..0&x=250&y=225&zoom=5","/ajax/mapping/get/cache/?filename=.1..1&x=250&y=225&zoom=5"]

Proxying Google's tiles and merging the layer ids

    $kmlUrls = urlencode($_GET['url']);
    $cachePath = dirname(__FILE__).'/cache.maps/tiles/';

    $cachedFiles = array_filter(explode(',',rawurldecode($kmlUrls)));
    $hash = sha1(rawurldecode($kmlUrls).".w{$_GET['w']}.h{$_GET['h']}.x{$_GET['x']}.y{$_GET['y']}.{$_GET['zoom']}");
    $cachePath.="{$_GET['x']}.{$_GET['y']}/{$_GET['zoom']}/";
    if (!is_dir($cachePath)) {
      @mkdir($cachePath, 0777, true);
    }

    // Returns the image if it has already been aggregated and cached.
    if (file_exists($path = $cachePath.$hash)) {
      header('Content-Type: image/png');
      $fp = fopen($path, 'rb');
      fpassthru($fp);
      exit;
    }

    // Extract the layer ids from the KML files that are to be merged.
    $layerIds = array();
    foreach( $cachedFiles AS $kmlFile) {
      $kmlFile="http://{$_SERVER['HTTP_HOST']}{$kmlFile}";

      $url = "http://maps.google.com/maps/gx?q={$kmlFile}&callback=_xdc_._1fsue7g2w";
      $c = @file_get_contents($url);
      if (!$c)
        throw new Exception("Failed to request {$url}");
      preg_match_all('/layer_id:"kml:(.*)"/i', $c, $matches);
      if (count($matches)>0 && isset($matches[1][0])) {
        $layerIds[] = "kml:{$matches[1][0]}";
      }
    }

    // Cache locally.
    if (count($layerIds)>0) {
      header('Content-Type: image/png');
      // Aggregate layers into a single image
      $link = "http://mlt0.google.com/mapslt?lyrs=" . implode(',',$layerIds);
      $link.="&x={$_GET['x']}&y={$_GET['y']}&z={$_GET['zoom']}&w={$_GET['w']}&h={$_GET['h']}&source=maps_api";
      echo $c = file_get_contents($link);
      @file_put_contents($path, $c);
    } else {
      // Output 1x1 png
      header('Content-Type: image/png');
      echo base64_decode('iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAAC0lEQVQIHWNgAAIAAAUAAY27m/MAAAAASUVORK5CYII=');
    }
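The cache key above is a sha1 of the decoded URL list plus the tile's dimensions, coordinates and zoom, so each unique tile maps to one file. The same scheme can be sketched in shell; the input string here is an illustrative example, not a real request:

```shell
# sha1 of "urls + .w{w}.h{h}.x{x}.y{y}.{zoom}" — one cache key per unique tile.
key=$(printf '%s' '/ajax/mapping/get/cache/?filename=.1..0.w256.h256.x250.y225.5' | sha1sum | awk '{print $1}')
echo "$key"
```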

Paging GeoXML loading

    function loadGeoXMLPaged(geoXMLUrl) {
      var cc = map.fromLatLngToDivPixel(map.getCenter());
      geoXMLUrl += '&format=JSON&method=getLinks&x='+cc.x+'&y='+cc.y+'&zoom='+map.getZoom();

      if (debug) GLog.writeUrl(geoXMLUrl);

      $.getJSON(geoXMLUrl, function(data) {
        geoXmlPager = data;
        loadGeoXmlPage();
      });
    }

    var timeoutPID = null;

    function loadGeoXmlPage() {
      var data = geoXmlPager.pop();
      if (data) {
        if (debug) GLog.writeUrl(BASE_URL+data);

        geoXmlStack.push(new GGeoXml(BASE_URL+data));
        map.addOverlay(geoXmlStack[geoXmlStack.length - 1]);

        // Load the next page once the current overlay has finished loading.
        GEvent.addListener(geoXmlStack[geoXmlStack.length - 1], "load", function() {
          timeoutPID = setTimeout(loadGeoXmlPage, 500);
        });
      } else {
        clearTimeout(timeoutPID);
        map.setZoom(map.getBoundsZoomLevel(bounds));
        map.setCenter(bounds.getCenter());
        try {
          geoXmlStack[geoXmlStack.length - 1].gotoDefaultViewport(map);
        } catch(e) {}
      }
    }

All the code above has been modified slightly to make it applicable to others; however, don't accept raw user input unchecked, as this is simply an example.

I recently came across a peculiar issue where dates and times were causing problems with a product we had developed within Australia. Within “Red Hat Enterprise Linux Server release 5 (Tikanga)”, the date within PHP was being read as EST instead of AEST/AEDT; however, running “date” from the terminal or “SELECT NOW()” from MySQL displayed the correct time.

[user@server ~]$ date
Wed Oct 14 22:24:20 EST 2009

[user@server ~]$ php -r'var_dump(date("r e"));'
string(51) "Wed, 14 Oct 2009 21:25:07 +1000 Australia/Melbourne"

[user@server ~]$ php -r'var_dump(date("r e"));var_dump(getenv("TZ"));var_dump(ini_get("date.timezone"));var_dump(date_default_timezone_get());';
string(51) "Wed, 14 Oct 2009 21:25:07 +1000 Australia/Melbourne"
bool(false)
string(0) ""
string(19) "Australia/Melbourne"

[user@server ~]$ mysql -uuser -ppassword -e 'SELECT NOW();'
+---------------------+
| NOW()               |
+---------------------+
| 2009-10-14 22:26:12 |
+---------------------+

As you can see, PHP incorrectly gets the time, being an hour off. Running the above on Debian worked perfectly fine, and the zoneinfo files matched my local machine.

[user@server ~]$ md5sum /etc/localtime && md5sum /usr/share/zoneinfo/Australia/Sydney && md5sum /usr/share/zoneinfo/Australia/Melbourne
85285c5495cd5b8834ab62446d9110a9 /etc/localtime
85285c5495cd5b8834ab62446d9110a9 /usr/share/zoneinfo/Australia/Sydney
8a7f0f78d5a146db4bf865ca91cc1c42 /usr/share/zoneinfo/Australia/Melbourne

After a fair amount of digging I ended up coming across Red Hat ticket #478566. Amazingly the ticket is marked as “CLOSED WONTFIX”.

There were a few interesting points in some of the conversations I read:

“Alphabetic time zone abbreviations should not be used as unique identifiers for UTC offsets, as they are ambiguous in practice. For example, “EST” denotes 5 hours behind UTC in English-speaking North America, but it denotes 10 or 11 hours ahead of UTC in Australia; and French-speaking North Americans prefer “HNE” to “EST”.” – twinsun

Different locations in Australia have varying interpretations of summer time, with different start/end dates and clock shifts. As the operating system also has no zoneinfo data for abbreviations such as AEST or AEDT (unless you create these yourself), you cannot rely on getting the correct time from PHP on Red Hat.

So far I have resorted to the following:

[user@server ~]$ php -r 'date_default_timezone_set("Etc/GMT-11"); var_dump(date("r"));'
string(31) "Wed, 14 Oct 2009 22:24:29 +1100"

I have been migrating a large number of websites and consolidating servers to reduce costs. As a result it is important to ensure that services are planned effectively and migrated smoothly, which got me thinking about the aspects to consider prior to migrating services.

Planning

  • Make a preliminary checklist of services actively in use by each active domain, e.g. FTP, HTTP, SMTP, IMAP, POP3, MySQL.
  • What maintenance periods do you have available, if any?
    • What volume of traffic do you have, and when are your quietest periods?
  • Do you have dedicated infrastructure, sharded or split by service/role?
    • Can parts of the infrastructure be migrated as individual components?
  • List the core functionality of the domain for testing purposes.
    • Ideally this should be wrapped in both unit and functional tests.
      • Examples are email, uploads (permissions), adding/editing/removing users.
  • How many servers are you migrating?
    • Large quantities should be automated.
  • How critical is the site/service?
    • Does it stop 80 staff working?

Specific

  • Services
    • Ensure services are initially installed on the new server(s).
    • List all configuration files for a particular service (tree).
      • Ensure configuration between each service is identical, or compromises are made.
    • List the data directories for each service, e.g. /var/lib/mysql.
      • Can data be transferred atomically?
      • Can services be replicated and brought into sync?
      • Can data be back-filled?
        • E.g. are large log tables required for the site to function? What is the minimal effort required to bring the site up?
  • SSL
    • Ensure a valid certificate exists for any CDN, sub-domain and domain.
  • Email
    • Are there any special firewall or configuration requirements?
  • DNS
    • Lower the TTL for a domain you're preparing to transfer (if possible).
      • You cannot rely on low TTLs; they are cached by large corporates, ISPs etc.
    • Ensure the domain is bound to a unique VIP on the new servers; if DNS resolution fails, you can put a header('Location: http://10.10.10.10'); in the old site to ensure the domain will resolve correctly.
      • Test this prior to transfer for both HTTP and HTTPS if applicable.
  • Permissions
    • Do you upload content to the servers? Does your code write to the filesystem?
      • Is this writable?
    • Under which user/group is this written?
  • Cache
    • Does your site make use of a distributed or local cache?
      • Could there be collisions between different sites, i.e. do you prefix cache key names per site?
  • Networking
    • Can specific services be migrated early?
      • Repoint via iptables, and keep an eye on bytes passing through the interface until it is redundant.
  • Security
    • Were there any firewall restrictions that need to be replicated, whether hardware, iptables etc.?
    • Chrooted environments, users copied, SSH keys copied.
  • Optimizations
    • Were there any special optimizations, e.g. dnsmasq or sysctl changes?
  • Load balancing
    • Ensure each domain has its own VIP – HTTP_HOST fails in HTTP 1.0 clients.
    • Ensure wildcards are not specified within virtual hosts – see above.
    • Ensure sites load balanced over SSL pass TCP requests through correctly; in addition, see the first point.
    • ifdown each VIP in the webserver pool; does it fail over with the correct site on all nodes?
  • Monitoring
    • If you previously had monitoring on the servers (you should), has this been replicated to the new servers?
  • Database (will vary depending on setup)
    • Is the database replicated?
      • Take LVM snapshots of the raw data on a slave and rsync them to the new servers.
        • Ensure you change configuration such as server ids, permissions on the master and the firewall, then start the service and replication; it will then be ready to replicate from the correct binlog positions.
  • Other general changes
    • Are there customizations to /etc/hosts needed to get sites working?
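For the preliminary checklist of services in use, a rough probe of the common ports can be scripted; this sketch uses bash's /dev/tcp, and the host and port list are assumptions to adapt:

```shell
# Probe common service ports (FTP, SSH, SMTP, HTTP, IMAP, MySQL) on a host.
host=127.0.0.1
for port in 21 22 25 80 143 3306; do
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```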

Let me know if there is anything you think I have missed.

Recently we had an issue with one of our hosting provider's load balancing (LVS), which resulted in some very small outages. As a result we decided to set up our own load balancing that we had full control over and could manage ourselves, in addition to choosing a better-suited weighting algorithm.

Each webserver is set up using ucarp, an implementation of the Common Address Redundancy Protocol (CARP), allowing failover of a single Virtual IP (VIP) for high availability. We bound multiple VIPs to each host, as we noticed some HTTP 1.0 clients incorrectly sending the host address to the server.

There are many ways you can then proxy the webservers and load balance; however, we decided to use haproxy. This can also be achieved with pound, Apache mod_proxy, mod_backhand etc.

In order to setup ucarp & haproxy:

apt-get install -y haproxy ucarp

Modify /etc/network/interfaces, giving each interface a unique ucarp-vid; adjust ucarp-advskew for weighting on each server (increment it by one for each server), and set ucarp-master to yes on the master. Adjust the configuration below appropriately.

# The primary network interface
auto eth0
iface eth0 inet static
        address   10.10.10.2 # IP address of server
        netmask   255.255.255.0
        broadcast 10.10.10.255
        gateway   10.10.10.1
        ucarp-vid 3
        ucarp-vip 10.10.10.20 # VIP to listen on
        ucarp-password password
        ucarp-advskew 10
        ucarp-advbase 1
        ucarp-facility local1
        ucarp-master yes
iface eth0:ucarp inet static
        address 10.10.10.20 # VIP to listen on
        netmask 255.255.255.255

To bring the interface up, simply run the following:

ifdown eth0; ifup eth0
ifdown eth0:ucarp; ifup eth0:ucarp

In order to configure haproxy:

sed -i -e 's/^ENABLED.*$/ENABLED=1/' /etc/default/haproxy

Reconfigure Apache to listen only on local interfaces (/etc/apache2/ports.conf), replacing "Listen 80" with:
Listen 10.10.10.20:80
Listen 10.10.10.2:80

Edit /etc/haproxy/haproxy.cfg:

listen web 10.10.10.20:80
        mode http
        balance leastconn
        stats enable
        stats realm Statistics
        stats auth stats:password
        stats scope .
        stats uri /stats?stats
        #persist
        server web1 10.10.10.2:80 check inter 2000 fall 3
        server web2 10.10.10.3:80 check inter 2000 fall 3
        server web3 10.10.10.4:80 check inter 2000 fall 3
        server web4 10.10.10.5:80 check inter 2000 fall 3
        server web5 10.10.10.6:80 check inter 2000 fall 3

Then restart haproxy with /etc/init.d/haproxy restart

Carp & HA Load Balancing

After changing your DNS to point to 10.10.10.20, you will be able to see the traffic balanced between the servers by visiting http://10.10.10.20/stats?stats with the credentials assigned above, and watching the bytes balanced between the servers listed.
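To see which node currently holds a VIP, you can check whether the address is bound to a local interface; a minimal sketch, assuming the VIP from the configuration above:

```shell
# If the VIP appears on a local interface, this node is the ucarp master.
vip=10.10.10.20
if ip -o addr show 2>/dev/null | grep -qF "$vip"; then
  echo "master: holding $vip"
else
  echo "backup: $vip held elsewhere"
fi
```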


The Project

I was recently working on a project to expose our trading systems via XmlRpc, Rest and SOAP. It was quite an interesting project, which took two of us three weeks to develop (Amongst other things).

This involved creating a testbed that would automatically generate the payload and response for each protocol. The parameters of each class method are introspected, capturing each parameter's data type and allowing for user input via standard HTML forms. This is probably best described with a picture or two.

Most of the documentation was generated via reflection and the comments within the docblocks; parameters and notes were also generated, making it quick and simple to update. In addition, the start and end lines of each method were parsed for any applicable error codes/faults that may be returned.

Rest API interface

XmlRpc API Interface - executed API method

Zend Framework

Using the Zend Framework for the first time in a commercial product was not exactly hassle-free, and it still has quite a few issues with its webservices implementation. Currently there seems to be quite a bit of confusion regarding its Rest implementation and whether it is to be merged; it would be great if someone could clarify this.

The main issue I found with the Zend Framework's implementation of XmlRpc and Rest is that it assumes the payload it receives is valid. During my development I tended to mix the payloads from SOAP, XmlRpc and Rest, yet it would still assume that SimpleXML could parse the input.

For example, $this->_sxml is assumed to be a valid object; if it is not, you will get either an invalid method call or an undefined index, which doesn't render well for an XmlRpc server.

    /**
     * Constructor
     *
     * @param string $data XML Result
     * @return void
     */
    public function __construct($data)
    {
        $this->_sxml = simplexml_load_string($data);
    }

    /**
     * toString overload
     *
     * Be sure to only call this when the result is a single value!
     *
     * @return string
     */
    public function __toString()
    {
        if (!$this->getStatus()) {
            $message = $this->_sxml->xpath('//message');
            return (string) $message[0];
        } else {
            $result = $this->_sxml->xpath('//response');
            if (sizeof($result) > 1) {
                return (string) "An error occurred.";
            } else {
                return (string) $result[0];
            }
        }
    }

One of the main issues with Rest was that it needed a ksort when using the Rest client, as the arguments were not necessarily passed in order. A request can be "rest.php?method=x&arg1=1&arg0=0", and each arg would be interpreted in the order it was received. This should be fixed in the next release of the ZF.
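The ordering problem is easy to reproduce outside PHP: if positional arguments arrive keyed by name but out of order, sorting by key (which is what ksort does) restores the intended order. A trivial shell illustration:

```shell
# Arguments received out of order; sorting by name recovers arg0, arg1, ...
printf '%s\n' 'arg1=1' 'arg0=0' | sort
# → arg0=0
# → arg1=1
```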

The webservices we are exposing need to perform well given the number of transactions they will handle, and Zend_Server_Reflection performs a large amount of reflection (which I only noticed after I started profiling). Wanting to reduce that overhead, I looked at Zend_XmlRpc_Server_Cache. The first thing I did was profile it, and it added a considerable amount of overhead itself: its implementation uses serialize, which is a relatively slow process and should be avoided unless there is a large cost in initializing objects. So most likely Zend_XmlRpc_Server_Cache will not add any benefit. var_dump'ing the reflection in XmlRpc spews out a shocking amount of information on some fairly large classes.

  if (!Zend_XmlRpc_Server_Cache::get($cacheFile, $server)) {
    // On a cache miss, attach the service class and persist the reflection.
    $server->setClass('MyService'); // 'MyService' is a hypothetical example
    Zend_XmlRpc_Server_Cache::save($cacheFile, $server);
  }

Generating WSDL

I tried a number of WSDL generators, including the implementation in the ZF incubator, which I found to be the best; yet I still had to write a large chunk of the WSDL by hand and adapt it.

The best way to debug is to run the SOAP client with verbose mode on; it will typically tell you the issue straight away.

  • Zend_Soap_AutoDiscover: duplicates an operation in the WSDL for methods with optional parameters. (ZF-2642)
  • Zend_Soap_AutoDiscover: if @return is missing from your docblock, the message response in the WSDL is not generated. (ZF-2643)
  • AutoDiscover duplicates the response if setClass is used multiple times. (ZF-2641)
  • One of my colleagues typically writes docblocks as "@return int, comment."; the comma caused return types to be dropped with AutoDiscover (more of an issue with Zend_Server_Reflection).

Other odd issues

Raw input bug

Some other obscurities I found were in capturing the raw request data. In our local development environment, reading the raw request input and then reading it again within the Zend Framework appears to work fine; however, in our pre-production environment the second read of the raw request fails. (PHP 5.2.2)


if (!isset($HTTP_RAW_POST_DATA)) {
    $HTTP_RAW_POST_DATA = file_get_contents('php://input');
}

It does seem a little odd that the XmlRpc server does not check whether $HTTP_RAW_POST_DATA is set before attempting to re-read the raw input.

Internal error: Wrong return type

Whilst running PHPUnit I noticed a very weird quirk in our local dev environment, which essentially did the following. You would expect this to output the contents of an array, right? Well, between the call to method x and returning the result back to method y, the result becomes NULL. This is very obscure and I've never seen anything like it, especially considering the value is explicitly set. I had a number of colleagues check this, and it had us all scratching our heads. Has anyone else seen anything similar?

class test {

  public function x() {
    $ret = array();
    for(...) {
      $ret[] = $row;
    }
    return $ret;
  }

  public function y() {
    $response = $this->x();
    var_dump($response);
  }
}

$t = new test();
$t->y();

Conclusion

Overall the project went pretty well; I'm confident it is now stable, especially given the number of tests we ran against it. It is adaptable to other projects that we may need to expose via an API; in total there are about 6,000 lines of code just testing the three different protocols it supports. I would rather have avoided the Rest implementation in ZF, as it still needs a lot of work; XmlRpc is a lot more stable and I would quite happily use it again. As there is a lot of overhead with reflection it is not the fastest implementation, comparable to some of our heavier web pages for some fairly simple functionality. It would be ideal to replace the reflection with something lighter, such as an array of the corresponding methods, parameters and types; however, I would only look into that if performance became a major issue.

PS. Just to note, I used PHP's built-in SOAP server.

C++

In: Linux

8 Jul 2007

I've had a lot of experience with other programming languages; however, a number of weeks ago I had to learn C++ from scratch in a very short period of time. This was to develop a real-time stock quote client: the goal was simply to push data from remote servers into our databases, filter which messages it would receive, and get something up and running fast as deadlines lingered. This was simple enough; however, with the rush, the application had its inherent flaws due to my lack of knowledge of C++, the API, and the goals it had to accomplish.

I’ve since had time to learn a little more C++ and limited time to design the application properly.

The Problems

The core problems with the application:

  • refactor, refactor, refactor
  • database connection pooling
  • Query remote CSP servers*1
  • Query remote CSP servers*1 from PHP
  • Configuration management
  • Monitoring
  • Flexible Database schema
    • Add columns to database schema dependent on datatype.
    • Log messages in XML per trade message with date/time, columns and values.

Compatible GCC

The first issue was that I used an API from interactive-data, which was compatible with "gcc version 3.2.3" and is not kept up to date. This meant compiling a compatible gcc from source, for 32-bit platforms only.


./configure --prefix=/usr/local/gcc/ --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --disable-checking --with-system-zlib --enable-__cxa_atexit --enable-languages=c,c++,objc,obj-c++

make bootstrap
cd gcc
make
sudo make install

Once I had a compatible compiler, I had to modify the Makefile and move a number of lib/so files to get MySQL to compile and get things working. Unfortunately I did not have a local machine to attach a debugger to, so everything was trial and error from the command line with g++32, which makes identifying runtime errors difficult.

The Logic

Once everything was in place the logic was fairly simple: for each field retrieved, construct a query with the field name, checking the field value's datatype, whether it be a datetime, varchar etc. Insert each trade message into one table and update another; if either failed, check whether the fault was due to a missing column, and if so, add it and re-execute the queries.

The problem soon arises when you need to know when each column was actually last updated, with which field, value and datetime, and the last insert id for the trade messages. Whilst looping through each trade message I constructed an XML document containing the above; the tricky part, however, is ensuring that it only updates the fragment matching the field in the schema. Not an ideal format to query from a database.
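As a hedged illustration of the per-message XML described above (the element and attribute names here are invented, not the actual schema), such a fragment might look something like:

```xml
<!-- Hypothetical per-trade-message update log: which field changed,
     its value and datetime, plus the last insert id of the message. -->
<message insert_id="12345" datetime="2007-07-08 14:05:01">
  <field name="last_price" type="decimal" updated="2007-07-08 14:05:01">102.5</field>
  <field name="volume" type="int" updated="2007-07-08 14:04:59">18000</field>
</message>
```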

Storing Data

One of the fundamental issues is managing and storing data. For some exchanges you don't want to store every trade message; simply storing the current data for a number of instruments is enough. Which servers or databases do you peg data to? If one database goes down, how do you handle fault tolerance? MySQL Cluster is not a feasible solution, requiring multiple servers and large amounts of memory per installation, and the databases are highly susceptible to corruption or faults. Also, particular sites may require data from multiple exchanges, so separating trade messages per database is not ideal either.

All of this fundamentally comes down to configuration management.

Configuration

One of the fundamental aspects of the application is configuration management. This covers where data should be stored for a particular exchange; the type of data to store, whether per trade message, current data or both; which servers to source data from; whether the data is real time or delayed; and whether to source data for bonds, equities, automated trades etc. Queries can also be grouped, or sent to remote servers. Some of the products, just for the London Stock Exchange for example, are:

  • London Stock Exch – Covered Warrants L1
  • London Stock Exch – International Equity Mkt Service L1
  • London Stock Exch – International Equity Mkt Service Level 2
  • London Stock Exch – UK Equity Mkt Service L1
  • London Stock Exch – UK Equity Mkt Service Level 2 (Depth Refresh)
  • London Stock Exchange: UK Equity Market Service Level 2

All of which is stored in several database tables and managed via a MySQL database and PHP frontend.

In high performance web applications you will always have bottlenecks within your application. Identifying these bottlenecks and optimizing is a tedious task, and they typically show themselves under load. A single bad/unindexed query can bring a server to its knees. A large number of rows will also help to highlight any poor queries, and on very large datasets you may reach the point where you have to decide whether to denormalize the database schema.

Explain each page

Whilst I develop sites, I typically print out all queries and EXPLAIN each select statement at the bottom of each page, highlighting it in red if it is doing a full table scan, using a temp table or a filesort, as well as displaying SHOW INDEXES FROM <table>…

Not only will this help you to optimize sites, you can also spot bad logic and areas to optimize, such as a query inside a loop when iterating through a users table, for example.

MySQL indexing optimization

How do you identify where bottlenecks occur?

One of my favourite Linux commands lately is watch. Mac users can get this from MacPorts via "sudo port install watch". A few other handy applications are mysqlreport and mytop.

# Appends the processlist to a file every second
watch -n1 "mysqladmin -uroot processlist >> watch.processlist.txt"

# Count the number of locked processes
watch -n1 "mysqladmin -uroot processlist | grep -i 'lock' | wc -l"

# Count the number of sleeping processes
watch -n1 "mysqladmin -uroot processlist | grep -i 'sleep' | wc -l"

# Run a specific query every second
watch -n1 "mysql -uadmin -p\`cat /etc/psa/.psa.shadow\` trade_engine --execute=\"SELECT NOW(), date_quote FROM sampleData WHERE 1=1 AND permission = '755' AND symbol='IBZL' GROUP BY date_quote;\""

# Emails mysqlreport every 60 seconds
watch -n60 "mysqlreport --all --email andrew@email.com"

# Displays the process list as well as appending the contents to a file
watch -n1 "mysqladmin -uadmin -p\`cat /etc/psa/.psa.shadow\` processlist | tee -a process.list.txt"

Watching the processlist is very handy for identifying locked, sleeping or sorting process states. If you have a large number of locked processes, you should typically change the table type to InnoDB, which supports row-level locking. If you have a large number of sleeping connections and persistent connections enabled, it most likely indicates that connections are not being reused.

Running a specific query every second is exceptionally handy; the example I gave indicates whether one of our crons is functioning correctly, as you can watch each row being inserted or updated. mysqlreport gives numerous pieces of information, extremely helpful in identifying issues; you can read about it in more depth at hackmysql.com/mysqlreportguide.

Look at the MySQL slow query log and optimize each query, starting with the most common. Consider whether you have to execute that query at all, or whether the result could come from a cache such as memcached.
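To capture the slow query log in the first place, it needs enabling in my.cnf; a sketch using the MySQL 5.0-era option names (the path and threshold here are examples, not a recommendation):

```ini
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time  = 2
log-queries-not-using-indexes
```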

I also typically tend to look at the following:

  • vmstat -S M
  • ps axl | grep -i 'mysql'
  • pstree -G
  • free -m

Reference:
http://dev.mysql.com/tech-resources/presentations/presentation-oscon2000-20000719/index.html

Installing memcached

In: Linux|PHP

7 Apr 2007

Recently I had to install memcached on a number of servers, and I would always tend to end up with errors whilst memcached tried to locate libevent. I always seem to forget LD_DEBUG, so I figured I would write up the process for installing memcached.

One of the dependencies of memcached is libevent, so first download the source files for libevent.


tar -xvf libevent-1.3b.tar.gz
cd libevent-1.3b
./configure;make;make install;

Download the latest Memcached source code from danga.com


gunzip memcached-1.2.1.tar.gz
tar -xvf memcached-1.2.1.tar
cd memcached-1.2.1
./configure;make;make install;

Often libevent.so cannot be found when executing memcached. The LD_DEBUG environment variable is very helpful for determining where libraries are being loaded from.


LD_DEBUG=help memcached -v

LD_DEBUG=libs memcached -v 2>&1 > /dev/null | less
18990: find library=libevent-1.3b.so.1 [0]; searching
...
18990: trying file=/usr/lib/libevent-1.3b.so.1
18990:
memcached: error while loading shared libraries: libevent-1.3b.so.1: cannot open shared object file: No such file or directory

Simply place the library where memcached will find it and execute memcached.


ln -s /usr/local/lib/libevent-1.3b.so.1 /lib/libevent-1.3b.so.1
memcached -d -u nobody -m 512 -l 127.0.0.1 -p 11211

The options for memcached are:

-l <ip_addr>
Listen on <ip_addr>; defaults to INADDR_ANY. This is an important option to consider, as there is no other way to secure the installation. Binding to an internal or firewalled network interface is suggested.
-d
Run memcached as a daemon.
-u <username>
Assume the identity of <username> (only when run as root).
-m <num>
Use a maximum of <num> MB of memory for object storage; the default is 64 megabytes.
-M
Return an error when maximum memory is reached, instead of removing items from the cache; additions will not be possible until adequate space is freed up.
-c <num>
Use <num> max simultaneous connections; the default is 1024.
-k
Lock down all paged memory. This is a somewhat dangerous option with large caches, so consult the README and memcached homepage for configuration suggestions.
-p <num>
Listen on port <num>; the default is port 11211.
-r
Raise the core file size limit to the maximum allowable.
-h
Show the version of memcached and a summary of options.
-v
Be verbose during the event loop; print out errors and warnings.
-vv
Be even more verbose; same as -v but also print client commands and responses.
-i
Print memcached and libevent licenses.
-P <filename>
Print pidfile to <filename>, only used under -d option.

To install the PECL package for PHP:

wget http://pecl.php.net/get/memcache-2.1.2.tgz
gzip -df memcache-2.1.2.tgz
tar -xvf memcache-2.1.2.tar
cd memcache-2.1.2
phpize
./configure;make;make install;

Add memcache.so to the php.ini file

extension=memcache.so

Then run

php -i | grep -i 'memcache'

memcache should be listed; then restart the web server.

For further information:
Distributed Caching with Memcached

Introduction

Currently I’m working with stock market data, and it’s quite an interesting topic: getting to the point of real-time data brings a number of new concepts into the mix. The first challenge is to import information from the feeds into our databases (MySQL); whilst this should be a relatively straightforward task, I’m sure we are going to hit issues in terms of writes to the database (INSERTs/UPDATEs). The information from these feeds will be used for various tasks that will require a lot of processing. The information will be displayed to the user via the web, so we have to push updated stock market information to the user dynamically via AJAX.

The concept of real-time computing should ideally mean latencies under 1 millisecond; however, I have previously worked for companies whose definition of real time meant a 15-minute delay. Whilst delays over the web are inevitable, I believe a one- to three-second delay would be acceptable for users viewing current information via AJAX.

Replication

As we will be using the data from the stock market for multiple applications, we will need to replicate the data from MySQL, which adds a further bottleneck to the application. Most notably, performance with replication becomes an issue because every slave still needs to execute the same write queries as the master. As the majority of queries will be writes rather than reads, this becomes a fundamental problem in itself, making replication questionable. So we will have to look at a multi-master MySQL setup, or MySQL Cluster, which holds databases in memory. The fundamental problem with replication is ensuring the consistency of the data between writes once replicated. Ideally, if a slave falls behind we want to ignore updates that have since been superseded and just apply the current values, to ensure we do not serve stale data.
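The idea of ignoring superseded updates can be sketched as a last-write-wins rule: each quote carries a feed timestamp, and a lagging slave applies an update only if it is newer than the value it already holds. This is a sketch; `apply_quote` and the tick format are illustrative assumptions, not an actual feed schema:

```python
# Last-write-wins application of stock ticks on a lagging replica (a sketch).
# Each tick is (symbol, price, feed_timestamp); stale ticks are discarded.
quotes = {}  # symbol -> (price, feed_timestamp)

def apply_quote(symbol, price, ts):
    current = quotes.get(symbol)
    if current is not None and current[1] >= ts:
        return False  # superseded: a newer value is already stored
    quotes[symbol] = (price, ts)
    return True

apply_quote("VOD.L", 1.31, 100)
apply_quote("VOD.L", 1.28, 95)   # replayed from the binary log: ignored
apply_quote("VOD.L", 1.33, 110)  # newer: applied
print(quotes["VOD.L"])  # (1.33, 110)
```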

Heartbeat

We will ideally have to create a heartbeat monitor and validate the latency of data between nodes. As mentioned previously, we want to ensure that no slave falls behind; for any slave that does fall behind, we want only the latest update for each stock to be applied and the rest of the binary log to be ignored. Additionally, we would need to separate inserts of historical data, inserted based on a sample time (‘1 Min’, ’15 Min’, ’Hour’, ’Midday’, ’End Of Day’, ’End Of Week’, ’End Of Month’); this would benefit most from being horizontally scaled.
This could be extended to monitor the latency seen by the end user and, with a little JavaScript, notify them that the data is out of date since the last sync.
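A minimal version of such a heartbeat check: the master writes a clock timestamp every second, and each slave compares the replicated timestamp against its own clock to estimate lag. The threshold and function names here are illustrative assumptions:

```python
# Heartbeat lag check (a sketch). The master inserts time.time() into a
# heartbeat table every second; the slave reads the replicated value back
# and compares it with its own clock.
MAX_LAG_SECONDS = 3.0  # illustrative threshold, not a measured figure

def replication_lag(master_heartbeat_ts, slave_clock_ts):
    """Seconds the slave is behind; never negative (clocks may be skewed)."""
    return max(0.0, slave_clock_ts - master_heartbeat_ts)

def is_stale(master_heartbeat_ts, slave_clock_ts, max_lag=MAX_LAG_SECONDS):
    """True when the slave should warn the user / skip the old binary log."""
    return replication_lag(master_heartbeat_ts, slave_clock_ts) > max_lag

print(replication_lag(1000.0, 1001.5))  # 1.5 -> within tolerance
print(is_stale(1000.0, 1010.0))         # True -> data is out of date
```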

Website Data

The website itself will have to use AJAX to dynamically update all stock prices and market activity applicable to the page. The fundamental issue is that prices update in real time: how often can we make an HTTP request while keeping server resources within reason? Looking at this further, we will have the bottlenecks of TCP/IP connections and the client's bandwidth (ideally we would test users' bandwidth), as well as whether the client accepts gzip or compressed content to reduce bandwidth costs.

AJAX request every second; a server typically handles 200 requests per second.
Say 25 users online: 25*60 = 1,500 requests per minute, or 2,160,000 per day.
Say 100 users online: 100*60 = 6,000 requests per minute, or 8,640,000 per day.
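The polling arithmetic above can be checked mechanically (one request per user per second, scaled up to a day):

```python
def daily_requests(users, requests_per_user_per_minute=60):
    """Return (requests per minute, requests per day) for polling clients."""
    per_minute = users * requests_per_user_per_minute
    return per_minute, per_minute * 60 * 24

print(daily_requests(25))   # (1500, 2160000)
print(daily_requests(100))  # (6000, 8640000)
```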

We could optionally increase the client's connection limit in Internet Explorer with a registry key, raising the two-connections-per-server limit that RFC 2616 recommends for persistent connections from HTTP/1.1 agents. The IE7 release does not increase this limit by default; this is most noticeable when a user downloads two files and IE waits for a connection to be released before starting a third download, for example.

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"MaxConnectionsPerServer"=dword:00000010
"MaxConnectionsPer1_0Server"=dword:00000010

I recently read “Binaries Belong in the Database Too” on sitepoint.com, and thought I would shed some light on my experience of storing files in databases. I’m sure many of you regard this as a taboo practice, and I would certainly agree, depending on the database. A project I worked on for MTV Networks Europe/International required a completely shared-nothing architecture. This meant that MTV’s hosting & operations department required that I store files in the database, despite my expressed hesitation.

The platform:

* Linux
* Apache
* MySQL
* PHP5

The problems

You typically get the common file upload problems with upload_max_filesize, max_input_time and execution time; however, you also have issues with MySQL connections, max packet sizes and chunked streams. Uploading via JUpload allows for large file uploads, but you still encounter TCP/IP connection interruptions and errors. Some of the more major issues I encountered were with the actual management of the data. Currently MySQL has no real support for handling Binary Large Objects; for example, if you try to load data in from a file you generally encounter max packet size errors. The most fundamental issue is that the MySQL protocol does not send chunked streams for BLOBs, so the client has to load the entire BLOB into memory. Admittedly, memory limits on the server were not too much of an issue, as I was using an 8-CPU server with 16GB of RAM; however, you may not have the infrastructure that I had available.

Whilst there were a number of limitations I had to resolve, as described above, there were also user errors I had not anticipated, such as attempts to upload 4MB BMP files to be streamed as images for a website. Hosting & operations had also not expected their ad sales department to attempt to upload 120+ MB video files.

DataTypes

First, let's look at some of the limitations of the BLOB datatype in MySQL; as you can see, there are length limitations on each BLOB type.

TINYBLOB
A BLOB column with a maximum length of 255 (2^8 - 1) bytes.

BLOB[(M)]
A BLOB column with a maximum length of 65,535 (2^16 - 1) bytes.
Beginning with MySQL 4.1, an optional length M can be given. MySQL will create the column as the smallest BLOB type large enough to hold values M bytes long.

MEDIUMBLOB
A BLOB column with a maximum length of 16,777,215 (2^24 - 1) bytes.

LONGBLOB
A BLOB column with a maximum length of 4,294,967,295 bytes, or 4GB (2^32 - 1). Up to MySQL 3.23, the client/server protocol and MyISAM tables had a limit of 16MB per communication packet / table row. From MySQL 4.0, the maximum allowed length of LONGBLOB columns depends on the configured maximum packet size in the client/server protocol and available memory.

Alternatives for storing >4GB BLOBs are:
* Compressing the BLOB so that it fits in 4GB
* Splitting the BLOB into 4GB chunks stored as separate rows

Tips:

Get Blob length
To find the length in bytes of a stored BLOB, simply use: SELECT LENGTH(blobcolumn) FROM table.

Get Blob fragment
To retrieve a large BLOB in pieces, repeatedly fetch fragments of it using SUBSTRING, i.e.:

SELECT SUBSTRING(document, 1, 10240) FROM documents WHERE did=3;
and then
SELECT SUBSTRING(document, 10241, 10240) FROM documents WHERE did=3;
etc.
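The fragment queries above follow a simple pattern, so a small helper can generate the 1-based offsets that MySQL's SUBSTRING expects. This is a sketch; the table, column, and key are taken from the example above:

```python
def substring_queries(total_length, chunk=10240, table="documents",
                      column="document", key="did=3"):
    """Yield the SELECT SUBSTRING(...) statements needed to fetch a BLOB of
    `total_length` bytes in `chunk`-byte pieces (MySQL offsets are 1-based)."""
    for offset in range(1, total_length + 1, chunk):
        yield (f"SELECT SUBSTRING({column}, {offset}, {chunk}) "
               f"FROM {table} WHERE {key};")

# A 25,000-byte BLOB needs three fragment queries:
stmts = list(substring_queries(25000))
print(len(stmts))  # 3
print(stmts[0])    # SELECT SUBSTRING(document, 1, 10240) FROM documents WHERE did=3;
```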

Inserting Blobs
Inserting data into BLOBs: binary data can be inserted as a hex literal, i.e. 'A' = 0x41 and 'AB' = 0x4142, and so on. The prefix is a zero, not a capital O.

If you want to insert binary data into a string column (such as a BLOB), the following characters must be represented by escape sequences:

NUL 	NUL byte (ASCII 0). Represent this character by '\0' (a backslash followed by an ASCII '0' character).
\ 	Backslash (ASCII 92). Represent this character by '\\'.
' 	Single quote (ASCII 39). Represent this character by '\''.
" 	Double quote (ASCII 34). Represent this character by '\"'.

When writing applications, any string that might contain any of these special characters must be properly escaped before it is used as a data value in an SQL statement sent to the MySQL server; base64 encoding is also a good option.
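For example, building the hex literal for a small binary value sidesteps the escaping problem entirely. Python's `bytes.hex` here stands in for whatever encoding your client library provides; in practice, prefer the driver's own escaping or prepared statements:

```python
def to_mysql_hex_literal(data: bytes) -> str:
    """Render binary data as a MySQL hex literal, e.g. b'AB' -> 0x4142,
    which avoids having to escape NUL, backslash and quote bytes."""
    return "0x" + data.hex().upper()

print(to_mysql_hex_literal(b"A"))          # 0x41
print(to_mysql_hex_literal(b"AB"))         # 0x4142
print(to_mysql_hex_literal(b"\x00\\'\""))  # 0x005C2722
```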

Indexing Blobs


BLOBs can sometimes be indexed, depending on the storage engine you’re using:
MyISAM, InnoDB, and BDB tables support BLOB and TEXT indexing. However, you must specify a prefix size to be used for the index. This avoids creating index entries that might be huge and thereby defeat any benefits to be gained by the index. The exception is that prefixes are not used for FULLTEXT indexes on TEXT columns. FULLTEXT searches are based on the entire content of the indexed columns, so any prefix you specify is ignored.
MEMORY tables do not support BLOB and TEXT indexes. This is because the MEMORY engine does not support BLOB or TEXT columns at all.

BLOB or TEXT columns may require special care:
Due to the typical large variation in the size of BLOB and TEXT values, tables containing them are subject to high rates of fragmentation if many deletes and updates are done. If you’re using a MyISAM table to store BLOB or TEXT values, you can run OPTIMIZE TABLE periodically to reduce fragmentation and maintain good performance.

The max_sort_length system variable influences BLOB and TEXT comparison and sorting operations. Only the first max_sort_length bytes of each value are used. (For TEXT columns that use a multi-byte character set, this means that comparisons might involve fewer than max_sort_length characters.) If this causes a problem with the default max_sort_length value of 1024, you might want to increase the value before performing comparisons. If you’re using very large values, you might need to configure the server to increase the value of the max_allowed_packet parameter. See Chapter 11, “General MySQL Administration,” for more information. You will also need to increase the packet size for any client that wants to use very large values. The mysql and mysqldump clients support setting this value directly using a startup option.

Solution

The solution ended up utilizing two memcached servers that cached BLOBs and objects in front of the MySQL server, which saved streaming the content directly from MySQL on each request. Selecting chunks of the binary large object and concatenating the results alleviates maximum packet errors from MySQL. The only other aspect to address is the initial upload; how this is implemented is entirely up to you, whether via JUpload, SCP, FTP, or some other means. Finally, increase the settings described above. To import/export binary files I wrote a script that queried the database and wrote out the files in chunks; this script did take a while to execute.
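The memcached layer followed the usual cache-aside pattern: check the cache first, fall back to the chunked MySQL fetch on a miss, then populate the cache. In this sketch a dict stands in for the memcache client, and `load_blob_from_mysql` is a placeholder for the chunked SELECT described above:

```python
# Cache-aside fetch for BLOBs (a sketch). A dict stands in for the two
# memcached servers; load_blob_from_mysql is a placeholder for the chunked
# SUBSTRING fetch-and-concatenate described earlier.
cache = {}
db_hits = []  # records which keys actually reached MySQL

def load_blob_from_mysql(key):
    db_hits.append(key)
    return b"...blob bytes for " + key.encode()

def get_blob(key):
    blob = cache.get(key)
    if blob is None:  # miss: fetch from MySQL and populate the cache
        blob = load_blob_from_mysql(key)
        cache[key] = blob
    return blob

get_blob("doc:3")  # first request hits MySQL
get_blob("doc:3")  # second request is served from cache
print(db_hits)     # ['doc:3'] -> only one database round trip
```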

I have heard that Oracle has very good support for handling Binary Large Objects... maybe that's something to look into.

Pointers.

http://jeremy.zawodny.com/blog/archives/000078.html
http://jeremy.zawodny.com/blog/archives/000840.html
http://www.lentus.se/warehouse/SlidesDW.ppt
http://sunsite.mff.cuni.cz/MIRRORS/ftp.mysql.com/doc/en/BLOB.html
