Sonntag, 10. Januar 2016

Migrate Blog to blogger.com

I had been running a blog for many years, one I originally developed myself on Java technology. Developing this blog was always fun; it was my playground for trying out new technologies.
But over time it became a burden to me. It was obvious that design and technology were moving on, but I did not have the time to update my own implementation. Furthermore, the implementation had become a big mess by now.
Of course I was not willing to give up the 600 posts, because they are an important documentation of my life.

So I started to look into possibilities of migrating to a cloud blogging platform. I ended up with blogger.com because it has a good REST API and even a Java client library which utilizes this API.
But I had to google for quite a long time to understand the proper usage of this library. There are different versions of the API, and even Google has wrong examples on its website.
Therefore I am posting my solution for how I finally managed to use the Blogger API v3 with Google's Java client library. Maybe it will help somebody.
But there is one big obstacle: the Blogger API allows only 50 post creations per day, and there is no way around it. I contacted Google in several ways; they suggested submitting this as a feature request, which I did.

You will get this error when you create more than 50 posts in one day:

Exception in thread "main" com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
  "code" : 403,
  "errors" : [ {
    "domain" : "usageLimits",
    "message" : "Rate Limit Exceeded",
    "reason" : "rateLimitExceeded"
  } ],
  "message" : "Rate Limit Exceeded"
}
    at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:145)

...
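Since only 50 posts can be created per day, migrating 600 posts has to be spread over about twelve days. Here is a minimal sketch of how one might split the posts into daily batches; the class name, chunk helper, and limit constant are my own illustration, not part of the Blogger client library:

```java
import java.util.ArrayList;
import java.util.List;

public class DailyBatcher {

    // The Blogger API v3 allows at most 50 post creations per day.
    static final int DAILY_LIMIT = 50;

    // Splits the full list of posts into batches of at most 'size' entries,
    // so that each batch can be pushed on a separate day.
    static <T> List<List<T>> chunk(List<T> items, int size) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            batches.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> posts = new ArrayList<>();
        for (int i = 0; i < 600; i++) {
            posts.add("post-" + i);
        }
        List<List<String>> batches = chunk(posts, DAILY_LIMIT);
        System.out.println(batches.size() + " days needed"); // prints: 12 days needed
    }
}
```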

The result is the blog at blog.alpenkarte.eu.

Finally, here is the code that worked for me:

import java.io.File;
import java.net.URL;
import java.util.Collections;
import java.util.List;

import com.google.api.client.auth.oauth2.Credential;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.client.util.DateTime;
import com.google.api.client.util.store.FileDataStoreFactory;
import com.google.api.services.blogger.Blogger;
import com.google.api.services.blogger.Blogger.Posts.Insert;
import com.google.api.services.blogger.BloggerScopes;
import com.google.api.services.blogger.model.Post;

public class BloggerAccess {

    private static final String BLOG_ID = "1234";
    private static final String SERVICE_ACCOUNT_EMAIL = "demo@gmail.com";

    private static NetHttpTransport HTTP_TRANSPORT;
    private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
    private static final File DATA_STORE_DIR =
            new File(System.getProperty("user.home"), ".store/plus_sample");

    private static FileDataStoreFactory dataStoreFactory;

    public static Post savePost(String content, String title, DateTime date, String metaData, List<String> labels)  throws Exception {

        HTTP_TRANSPORT = GoogleNetHttpTransport.newTrustedTransport();
        dataStoreFactory = new FileDataStoreFactory(DATA_STORE_DIR);

        Credential credential = authorizeService();

        // Construct the Blogger API access facade object.
        Blogger blogger = new Blogger.Builder(HTTP_TRANSPORT, JSON_FACTORY, credential).setApplicationName("BlogMigration").build();
        Post post = new Post();
        post.setContent(content);
        post.setTitle(title);
        post.setPublished(date);
        post.setCustomMetaData(metaData);
        post.setLabels(labels);
        Insert insert = blogger.posts().insert(BLOG_ID, post);

        Post postResponse = insert.execute();
        String postURL = postResponse.getUrl();
        System.out.println("POST URL: " + postURL);
        return postResponse;
    }
   
   
   
    private static Credential authorizeService() throws Exception {

        // The P12 key file of the service account, loaded from the classpath.
        URL url = BloggerAccess.class.getResource("BlogMigration-e3489c32f480.p12");
        File authFile = new File(url.getPath());

        GoogleCredential credential = new GoogleCredential.Builder()
                .setTransport(HTTP_TRANSPORT)
                .setJsonFactory(JSON_FACTORY)
                .setServiceAccountId(SERVICE_ACCOUNT_EMAIL)
                .setServiceAccountScopes(Collections.singleton(BloggerScopes.BLOGGER))
                .setServiceAccountPrivateKeyFromP12File(authFile)
                .build();

        // Get the access token from https://developers.google.com/oauthplayground/
        credential.setAccessToken("ya29.ZQIAFc34WrvahPSGiChzybthrltfra6i2Va3WNaRkaSPC8gfL4XkTJgv8884fxO4R5c7");

        return credential;
    }

}

Montag, 3. Februar 2014

Provide location based weather data

OpenWeatherMap.org has a great, simple REST API for current weather and weather forecasts. My goal was to use this API to provide weather information for the map area a user is looking at.

My requirements were:
  1. Asynchronous integration through messaging to avoid any performance impact.
  2. Reduce service calls to the OpenWeatherMap.org API, since weather does not change every 10 minutes and does not differ every 100 meters.


The result is the architecture above. The clients send geo coordinates to a topic and can fetch the current weather information and weather forecast from a different topic, filtered by their client ID.

This works pretty well: the number of requests to the OpenWeatherMap API is reduced by approximately 80% by using Ehcache. I store the weather information as JSON strings in-memory in Ehcache, which guarantees very fast data access times.
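The caching only pays off if nearby coordinates map to the same cache entry. Here is a minimal sketch of that idea; the cell size, class, and method names are my own illustration, and my actual implementation uses Ehcache with a time-to-live instead of a plain map:

```java
import java.util.HashMap;
import java.util.Map;

public class WeatherCache {

    // Snap coordinates to a coarse grid (cells of roughly 5 km),
    // because the weather does not differ every 100 meters.
    static final double CELL_SIZE_DEGREES = 0.05;

    // Two nearby coordinates fall into the same grid cell and
    // therefore produce the same cache key.
    static String gridKey(double lat, double lon) {
        long cellLat = (long) Math.floor(lat / CELL_SIZE_DEGREES);
        long cellLon = (long) Math.floor(lon / CELL_SIZE_DEGREES);
        return cellLat + ":" + cellLon;
    }

    private final Map<String, String> cache = new HashMap<>();

    // Returns the cached weather JSON for the grid cell, or null on a miss.
    String lookup(double lat, double lon) {
        return cache.get(gridKey(lat, lon));
    }

    void store(double lat, double lon, String weatherJson) {
        cache.put(gridKey(lat, lon), weatherJson);
    }

    public static void main(String[] args) {
        WeatherCache cache = new WeatherCache();
        cache.store(47.123, 11.456, "{\"temp\": -3}");
        // A request from a few hundred meters away hits the same cell,
        // so no call to the OpenWeatherMap API is needed.
        System.out.println(cache.lookup(47.124, 11.457));
    }
}
```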





Sonntag, 10. Februar 2013

Visualize the slope gradient (Hangneigung)

Outlining the steepness of a slope by colorizing the map is much simpler than I thought. The GDAL utilities have it built in (http://www.gdal.org/gdaldem.html).

Therefore it is the same process as creating the hillshading. The raw data is again the SRTM data, which originates from a NASA shuttle mission in 2000. You can download this data as *.hgt.zip files.

Assuming that you have all needed files in a directory, you can create the slope gradient visualization like this:


#!/bin/bash
for FILENAME in $(find /opt/mapnik/data/shades/raw/raw -name "*.hgt.zip")
do
    echo $FILENAME
    /opt/mapnik/data/shades/raw/srtm_generate_hdr.sh $FILENAME
    FILEBASENAME="`basename $FILENAME .hgt.zip`"
    rm $FILEBASENAME.prj
    rm $FILEBASENAME.hgt
    rm $FILEBASENAME.bil
    rm $FILEBASENAME.hdr
    gdal_translate -of GTiff -co "TILED=YES" -a_srs "+proj=latlong" $FILEBASENAME.tif ${FILEBASENAME}_adapted.tif
    rm $FILEBASENAME.tif
    gdalwarp -of GTiff -co "TILED=YES" -srcnodata 32767 -t_srs "+proj=merc +ellps=sphere +R=6378137 +a=6378137 +units=m" -rcs -order 3 -tr 30 30 -multi ${FILEBASENAME}_adapted.tif ${FILEBASENAME}_warped.tif
    rm ${FILEBASENAME}_adapted.tif
    gdaldem slope ${FILEBASENAME}_warped.tif ${FILEBASENAME}_slope.tif
    rm ${FILEBASENAME}_warped.tif
    gdaldem color-relief ${FILEBASENAME}_slope.tif color_slope2.txt rendered/${FILEBASENAME}_slopecolor.tif
    rm ${FILEBASENAME}_slope.tif
done

The *_slopecolor.tif files contain the colors for the slope gradient.
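The color_slope2.txt file passed to gdaldem color-relief is a plain-text color ramp: each line maps a slope value in degrees to an RGB color, and values in between are interpolated. The values below are only illustrative, not my actual ramp:

```
0   255 255 255
15  250 230 140
30  245 155 66
45  230 70  40
90  160 20  20
```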

Here is where I got the information from; this post explains how to set the colors: http://blog.thematicmapping.org/2012/06/creating-color-relief-and-slope-shading.html

Sonntag, 15. Juli 2012

Ubuntu 10.04 has old PostGIS version

For the setup of my mapnik cloud server, I also need a new PostGIS database. I do not want to have my data on the cloud server, therefore I installed the database on a permanently hosted server (and pray that the connection between both servers is fast enough). This server runs on Ubuntu 10.04, and I followed these installation instructions for the DB setup: http://switch2osm.org/serving-tiles/manually-building-a-tile-server/
But what a disappointment: the Ubuntu package postgresql-8.4-postgis installs version 1.4 of PostGIS on the 8.4 Postgres.
I had already migrated all map data from my PostGIS 2.0 installation and only hit the problem when I wanted to migrate the elevation data. That doesn't work; the required database functions are not compatible.
Now I have to start from the beginning again.

Montag, 9. Juli 2012

Setting up Mapnik Cloud Server (Ubuntu 11.10 on Amazon EC2)

Rendering with mapnik is very CPU intensive; my PC needs about 500 hours for a complete rendering of the Alps from zoom level 1 to 15. This is frustrating.
It doesn't make sense to buy a powerful server either, because you don't want to render the map every day. This is a great use case for the cloud.
A cloud server is a bare server with only a standard OS installation. And a server loses all data and configuration when you shut it down (at least when you don't want to pay for a server which is not running, and this is the core idea of cloud computing).
Therefore it is necessary to have a setup script which fully enables the server for its intended function after the boot process. In the case of mapnik, this script needs to install the library prerequisites for mapnik, compile and install mapnik, and put all configurations in place.
I don't want to place the PostGIS database on the cloud server. This server is supposed to be a stateless computation unit; the data must be located somewhere else. I will describe my complete setup in another post.

Here is the setup script. It is derived from the description on switch2osm, but I had to modify the libraries and add the mapnik configuration setup.

(
# INSTALL LIBRARIES
sudo apt-get update
sudo apt-get -y install git-core libltdl-dev libpng-dev libicu-dev libboost-python-dev python-cairo-dev python-nose libboost-dev libboost-filesystem-dev libboost-iostreams-dev libboost-regex-dev libboost-thread-dev libboost-program-options-dev libboost-python-dev libfreetype6-dev libcairo2-dev libcairomm-1.0-dev libgeotiff-dev libtiff4 libtiff4-dev libtiffxx0c2 libsigc++-dev libsigc++0c2 libsigx-2.0 libsigx-2.0-dev libgdal1-dev python-gdal imagemagick ttf-dejavu libxml2-dev subversion
# INSTALL MAPNIK
sudo mkdir /opt/mapnik
sudo chown ubuntu.users /opt/mapnik
cd /opt/mapnik
git clone git://github.com/mapnik/mapnik
cd mapnik
git branch 0.7 origin/0.7.x
git checkout 0.7
python scons/scons.py configure INPUT_PLUGINS=all OPTIMIZATION=3 SYSTEM_FONTS=/usr/share/fonts/truetype/
python scons/scons.py
sudo python scons/scons.py install
sudo /sbin/ldconfig
# GET MAPNIK STYLE
svn co -r 27279 http://svn.openstreetmap.org/applications/rendering/mapnik mapnikstyle
# GET SHADING IMAGES WHICH HAVE ALREADY BEEN RENDERED
mkdir mapnikstyle/shades
cd mapnikstyle/shades
wget http://{my_server}/hillshades.tar.gz
tar -xvzf hillshades.tar.gz
# GET CONFIGURATION FOR MAPNIK
cd ..
wget http://{my_server}/osm.xml
wget http://{my_server}/datasource-settings.xml.inc
mv datasource-settings.xml.inc inc
wget http://{my_server}/settings.xml.inc
mv settings.xml.inc inc
wget http://{my_server}/fontset-settings.xml.inc
mv fontset-settings.xml.inc inc
) >> /var/log/mapniksetup.log 2>&1

Freitag, 6. Juli 2012

Mapnik rendering CPU load

OK, I know that my PC has a hard time rendering the tiles with Python. But it shouldn't exaggerate. Such a sissy.


New style for tracks and paths

I have changed some more default mapnik rendering settings, which I don't like for mountain maps:

  1. Unclassified tracks are shown with an ugly brown dashed line which confuses the viewer of the map. It looks like a totally different kind of way.
  2. Some OSM contributors map mountain trails as paths, others as footways. Therefore both get the same rendering style. I think in the mountains all trails are paths.
  3. In reality nobody can really assess the difference between a T3 and a T4 path; it's the same with tracks. Therefore all tracks and paths look the same, like on an ordinary topographic map.
  4. The black and white dashed lines for tracks looked too strong on zoom levels 13 and 14. A dark grey without white looks more harmonious.
At the moment you can see the difference near Unterammergau in Bavaria: on the left side of the street 'B23' is the new rendering style, on the right side the old one.
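Such changes are made in the mapnik XML stylesheet. As an illustration of the kind of rule involved, here is a hypothetical fragment in the mapnik 0.7 XML syntax; the actual rules in osm.xml are more involved, and the style name, colors, widths, and scale denominator here are only examples:

```xml
<Style name="tracks-and-paths">
  <Rule>
    <!-- Render tracks, paths, and footways identically, like on a topographic map -->
    <Filter>[highway] = 'track' or [highway] = 'path' or [highway] = 'footway'</Filter>
    <MaxScaleDenominator>100000</MaxScaleDenominator>
    <LineSymbolizer>
      <!-- A plain dark grey dash instead of the strong black/white pattern -->
      <CssParameter name="stroke">#444444</CssParameter>
      <CssParameter name="stroke-width">1.5</CssParameter>
      <CssParameter name="stroke-dasharray">4,2</CssParameter>
    </LineSymbolizer>
  </Rule>
</Style>
```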