Imagination is more important than knowledge

Albert Tollkuci Blog


Importing your servers to Amazon EC2

As more and more infrastructure is moving to the cloud, the need to make the migration as painless as possible has increased. Typically there are three scenarios:

  • Moving a physical machine to the cloud
  • Moving a virtual machine from local infrastructure to the cloud
  • Moving a machine from one cloud provider to another

Depending on the scenario, the operating system and the cloud provider, there are different options. Recently I had to migrate some machines (Windows and Linux) from Microsoft Azure to Amazon EC2. Fortunately, Amazon EC2 provides import/export functionality (see https://aws.amazon.com/ec2/vm-import/). Unfortunately, it has limitations on what it can import and there's no straightforward way to import directly from Microsoft Azure. Since I did go through the pain of doing it, I'm sharing the necessary steps below:

  1. The first thing you have to do in any scenario is to somehow create an image of the existing machine. If your machine is in an existing local virtual infrastructure (such as Hyper-V or VMware), that's easy. If not, you have to look for alternatives, which depend on your OS:
    1. In Windows, you can use VMware Converter (download from http://www.vmware.com/products/converter.html) to create an image from a machine running anywhere. Just install it on the machine you want to image and follow the steps. IMPORTANT: When importing to Amazon EC2 you should select only the OS disk, because it's not possible to import a machine with several disks. You can import the remaining disks after the machine is running in EC2.
    2. In Linux, there are several tools, but you can stick with the built-in dd utility. You have to make an image of the operating system disk (careful: the disk, not a partition). If the OS disk is /dev/sda, the command would be:
      dd if=/dev/sda of=/backups/os-sda.img
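      If the image has to be transferred over the network, a variation worth considering (a sketch, assuming gzip is installed on the machine) is to compress it on the fly; just decompress it again before the conversion in the next step:
      dd if=/dev/sda bs=4M | gzip -c > /backups/os-sda.img.gz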
  2. After you have the image, the next step is to make it compatible with the Amazon EC2 import utility. According to the documentation (see http://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html#prerequisites-image) it supports the OVA, VMDK, VHD and RAW formats. In theory the image created in step one should work, but since internally there are different variants of the VMDK and RAW formats, in practice it does not. What I found to work was converting them as below:
    1. In Windows, convert the VMDK file to VHD. You can use StarWind V2V Image Converter (see https://www.starwindsoftware.com/converter) to convert from VMDK to VHD. After installing it, follow the wizard and you will have your image in VHD format.
    2. In Linux, convert the RAW file to VMDK. Qemu tools (see http://www.qemu-project.org/) will do the job. After installing them, run:
      qemu-img convert -pO vmdk /backups/os-sda.img /backups/os-sda.vmdk
      When it is finished, you can transfer the VMDK file to a Windows machine and use the StarWind utility there to create a VHD image ready to import into Amazon EC2.
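      Before moving the file around, it may be worth a quick sanity check (qemu-img info prints the detected format and virtual size):
      qemu-img info /backups/os-sda.vmdk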
  3. The next step is to actually import the image. For this you first have to install and configure the Amazon EC2 API Tools (from https://aws.amazon.com/items/351?externalID=351). IMPORTANT: Do not confuse them with the AWS CLI Tools (https://aws.amazon.com/cli/). The reference documentation (at https://awsdocs.s3.amazonaws.com/EC2/latest/ec2-clt.pdf) is very straightforward. After you have set up the EC2 API Tools, the command to import is:
    ec2-import-instance "D:\temp\os.vhd" -f VHD -t t2.2xlarge -a x86_64 -b test -o access_key -w secret_key
    The meaning of the parameters is:
    1. -f VHD - Image format is VHD
    2. -t t2.2xlarge - Type of instance to create in EC2 is t2.2xlarge
    3. -a x86_64 - Architecture is 64-bit. In case of 32-bit, you should use i386
    4. -b test - Bucket where the image will be imported is named test. You have to create a bucket first in S3 and use that name here.
    5. -o access_key and -w secret_key are the access key and secret key to access your AWS account.
    If the import is successful, you will get a message similar to:
    Average speed was 74.650 MBps
    The disk image for import-i-fh2e5d1c has been uploaded to Amazon S3
    where it is being converted into an EC2 instance. You may monitor the
    progress of this task by running ec2-describe-conversion-tasks. When
    the task is completed, you may use ec2-delete-disk-image to remove the
    image from S3.
    As explained, by using:
    ec2-describe-conversion-tasks
    you can check the progress of the conversion. After it is finished, you will see the instance in the EC2 console and you can work with it just like any other instance.
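    If you don't want to re-run the command by hand, a simple way to keep an eye on it (a sketch; watch is a standard Linux utility, so this assumes the EC2 API Tools are set up on a Linux machine) is:
    watch -n 60 ec2-describe-conversion-tasks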

As you can see there are a few steps to follow, but it's still a better way than configuring everything from scratch, especially if you're importing machines with a lot of configuration details. As a final note, the above will work even if you're running Microsoft SQL Server with databases on different drives (as I did). After starting SQL Server some databases may have issues; you can recover them or use a backup to restore. In any case, the final step would be to use xSQL's excellent Schema and Data Compare utilities to synchronize the databases on the new machine with the old one.

Using Varnish Cache with ASP.NET Websites

Some time ago a friend introduced me to Varnish Cache, an excellent caching HTTP reverse proxy. In simpler terms, it's a piece of software between the browser and the back-end server that intercepts (and potentially caches) every request made to the server. It runs on Linux, but that should not stop ASP.NET developers from using it. You'll need some basic Linux skills, but with Google's help it's easily doable.

Recently, I moved a few sites to use it and there's a very noticeable performance improvement. Along the way, I learned some things which I'm sharing here. If you're designing a new site and you expect some traffic (more than a few thousand hits/day), you should definitely design it with the intention of supporting a caching HTTP reverse proxy. However, in most cases we have to maintain/improve existing sites, and there you have to dig deeper into Varnish and the HTTP protocol details to get things working. Some things to keep in mind, especially if you're using ASP.NET on the back end, are:

  1. You can use a single Varnish instance to serve multiple sites. For this to work, first you define all the back-ends in your VCL file. Next, in the vcl_recv subroutine, you set the correct back-end depending on the request (typically the host), with something along these lines:

    if (req.http.host == "example1.com") {
      # AT: backend_hint takes the backend object itself, not a quoted string
      set req.backend_hint = backend1;
    } elsif (req.http.host == "example2.com") {
      set req.backend_hint = backend2;
    }
  2. In anything but very simple configurations, you have to learn a bit of the VCL language and to use varnishlog. VCL is pretty straightforward for a developer, but varnishlog is your friend to debug it. It will show all the details of requests and responses going through Varnish, and at first it may look difficult to trace the details. However, if you combine it with std.log() and grep you get useful info. To use std.log() you have to import the std module with an import statement at the top of your VCL file. Anything you log using std.log() will go to varnishlog along with everything else that's being logged by default. To distinguish your logs, you can use some special string in the beginning, for example
    std.log("AT DEBUG - Request for...")
    and then use grep to get only your debug messages
    varnishlog | grep "AT DEBUG"

    Another helpful tip is to debug only requests coming from your own machine. For this you can test client.ip inside vcl_recv, like:
    if ("" + client.ip == "185.158.1.35") {
      # AT: client.ip is an IP type, so prepend "" to compare it as a string
      set req.http.x-at-debug = "1";
      std.log("AT DEBUG - recv URL: " + req.url + ". Cookies: '" + req.http.Cookie + "'");
    }

    At the same time, I'm setting a new header (x-at-debug), so I can test against it in other routines (for example vcl_backend_response), as in the sketch below.
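    A minimal sketch of such a test (the x-at-debug header is my own naming convention, not something Varnish defines):
    sub vcl_backend_response {
      if (bereq.http.x-at-debug == "1") {
        std.log("AT DEBUG - backend response status: " + beresp.status);
      }
    }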
  3. One of the biggest issues you're going to face with existing websites is the handling of user-specific content. Most websites have some section where users can log in and access additional content, manage their profile, add items to a shopping cart, etc. You certainly don't want the shopping cart of one user to mix up with another user's. Without Varnish we take session state as a given and use it to manage any kind of user-specific content. However, behind the scenes session state is made possible by cookies, and Varnish and cookies are not best friends! To understand this, you have to read a bit more about hashing and how Varnish stores the cache internally. By default Varnish will generate the key based on the host/url combination, so cookies are ignored. While it's relatively easy to include cookies in the hash, that's a bad idea. In a simple request to one of the websites I tested, the request cookie field contains the following among others:
    _gat=1; __utmt=1; __asc=2990a40415733b8f022836a9f7f; __auc=e3955e251562dd1302a38b507f9; _ga=GA1.2.1395830573.1469647499; __utma=1.1395830573.1469647499.1474034735.1474041546.90; __utmb=1.4.9.1474041549617; __utmc=1; __utmz=1.1473756384.78.8.utmcsr=facebook.com|utmccn=(referral)|utmcmd=referral|utmcct=/
    As you can see, these are third-party cookies from Google Analytics, Facebook, etc. Chances are all users will have similar cookies. If you hash all of them, you end up with different versions for each user and Varnish will be pretty much useless! The solution is to include cookies in the hash, but to be very careful about which cookies you use. The place to do this is in vcl_recv and you'll have to do some work with cookies. The approach I have used is to have a "special" cookie any time a user is logged on (for example "__user=1"). Inside vcl_recv I test for this cookie and if found, return pass, meaning do not cache it:
     if (req.http.Cookie ~ "__user=1") {
       # AT: Add extra header so we do not strip any cookie in backend response
       set req.http.x-keep-cookies = "1";
       return (pass);
     }
    Also, as you can see in the comment, I add an additional header "x-keep-cookies", so that I can do the same test in vcl_backend_response:
     if (bereq.http.x-keep-cookies == "1") {
       # AT: We should pass the response back to client as it is
       return (deliver);
     }

     For all other users, I strip all cookies, using unset req.http.Cookie in vcl_recv and unset beresp.http.set-cookie in vcl_backend_response (see the sketch below). This is all fine, except if you're using session state for users who are not logged in. The next item tackles this issue.
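     In sketch form, the stripping boils down to two lines, placed after the logged-in tests above so they only run for anonymous users:
     # In vcl_recv, after the __user test:
     unset req.http.Cookie;
     # In vcl_backend_response, after the x-keep-cookies test:
     unset beresp.http.set-cookie;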

  4. Typically in a large website, session state is used heavily, not only for logged-in users, but for all users. For example, you can keep shopping cart items in session before the user has logged in, or you may keep a flag marking whether a user has been shown a special offer the first time he visits the site. Again, this works fine without Varnish, but will break with the above configuration. The reason is that the ASP.NET session cookie (the same applies to PHP and other technologies) will get removed and users will get a new ASP.NET session every time. To work around this issue, you should not use session state for users that are not logged in. Instead you have to rely on cookies to track the same things. To make life easier, you should name all cookies that you want to keep in Varnish with the same prefix, for example "__at", and add logic to your VCL file to keep these cookies both in the request and the response. In vcl_recv, you can do the trick with some regular expressions:
    set req.http.Cookie = ";" + req.http.Cookie;
       set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
       set req.http.Cookie = regsuball(req.http.Cookie, ";(__at.*)=", "; \1=");
       set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
       set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");

    Things get more complicated in vcl_backend_response. When you set a cookie from your backend (using Response.Cookies.Add() in ASP.NET), it is translated to a Set-Cookie HTTP header. If you set a few cookies you'll have several Set-Cookie headers, and Varnish will give you only the first one when you access beresp.http.set-cookie (a flaw in my opinion!). I spent a lot of time on this issue until I found the cause. The solution is to use Varnish Modules, specifically the header module. After you import the module, it's easy to remove all Set-Cookie headers, except those starting with "__at", with something like:
    header.remove(beresp.http.set-cookie,"^(?!(__at.*=))");
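     As with std, the module has to be imported at the top of the VCL file before it can be used:
     import header;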
  5. Another heavily used method in ASP.NET is Response.Redirect(). It's a very simple method that will redirect the user to a new page. Behind the scenes it's translated into a 302 HTTP response. However, if you have some logic behind it, for example to redirect users coming from a mobile device to a mobile site, it will clash with Varnish. The reason is that even a 302 response will get cached by Varnish, and the next user coming from a desktop may get redirected to the mobile version. There are two options to solve it:
    1. Do not cache 302 responses, maybe cache only 200 responses (see the sketch below these options).
    2. Do not use Response.Redirect() with logic behind it. In the above scenario it is better to redirect users on the client side (by the way, there's an excellent JS library here). I would prefer this second option.
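     For the first option, a minimal sketch in vcl_backend_response (it marks 302 responses as uncacheable and leaves everything else to the existing logic):
     if (beresp.status == 302) {
       # AT: do not store redirects in the cache
       set beresp.uncacheable = true;
       return (deliver);
     }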
  6. For performance reasons, Varnish does not support HTTPS, so you can only cache content going over HTTP. So what about an e-commerce site that has both normal and secure content? If you have the secure content in a separate sub-domain, it's easy to use Varnish for the public part and leave the rest unchanged. However, if that's not possible, there's an alternative:
    1. Set Varnish to listen only to HTTP protocol (port 80)
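      On Debian/Ubuntu this is typically done through the varnishd startup options, e.g. in /etc/default/varnish (a sketch; the management port, VCL path and cache size shown are common defaults, adjust them to your setup):
          DAEMON_OPTS="-a :80 -T localhost:6082 -f /etc/varnish/default.vcl -s malloc,256m"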
    2. Install nginx and use it as a reverse proxy to listen only for HTTPS (port 443). Configure the SSL certificates in the config file using the ssl_certificate and ssl_certificate_key directives, and use the proxy_pass directive to pass the request to the backend server. The config file for the site will look something like:
          listen 443 ssl default_server;
          listen [::]:443 ssl default_server;
          ssl_certificate           /etc/ssl/certs/example.crt;
          ssl_certificate_key       /etc/ssl/private/example-keyfile.key;
      
          ssl on;
          ssl_session_cache  builtin:1000  shared:SSL:10m;
          ssl_protocols  TLSv1 TLSv1.1 TLSv1.2;
          ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
          ssl_prefer_server_ciphers on;
      
           server_name www.example.com;
      
           location / {
      
               proxy_set_header        Host $host;
               proxy_set_header        X-Real-IP $remote_addr;
               proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
               proxy_set_header        X-Forwarded-Proto $scheme;
      
                # Fix the "It appears that your reverse proxy set up is broken" error.
               proxy_pass              https://backend.example.com;
               proxy_read_timeout      90;
      
               proxy_redirect      https://backend.example.com https://www.example.com;
             }
      
      
       If you have set a password for your key file (as you should), you have to save it in a file and use the "ssl_password_file /etc/keys/global.pass" directive so nginx can read it.
  7. After you have Varnish running, you'll have to check its status and maintain it. Some useful recommendations are:

    1. Check how Varnish is performing with varnishstat. The two most important indicators are cache_hit and cache_miss. You want as many hits as possible, so if your numbers don't look good, check the config file.
    2. Another similar tool is varnishhist, which will show a histogram of Varnish requests. Hits are shown with "|", misses with "#". The more "|" you have and the more to the left they are, the better.
    3. To manage Varnish, there's the varnishadm tool. It has a few commands, but the one most used is checking the backend status. Run "varnishadm backend.list" to check the status of all your server backends.
    4. If you're used to checking IIS log files, the equivalent is the varnishncsa tool. It will log all requests and you can even customize it to include HIT/MISS details and check which URLs are getting misses and may need caching.
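    For example, to log the client, the URL and whether the request was a hit or a miss (a sketch using varnishncsa's format string; %{Varnish:hitmiss}x is a documented format specifier):
    varnishncsa -F '%h %U%q %{Varnish:hitmiss}x'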

To summarize, Varnish is an excellent and very fast tool. I would highly recommend it to anyone developing/managing public websites.

As a final note, Mattias Geniar's templates are very useful to get started and helped me a lot. You have to check them out.

Migrating to Azure - Quirks & Tips

Recently I have moved a dozen websites and web apps to Azure. Some are small apps used by a few users with a few database tables, while some are public sites visited by tens of thousands of visitors every day, with big databases (tens of GB). I've learned quite a bit during this process and below are some of the things to take into account if you're going to do something similar:

  • Check carefully the hard disk performance and the max IOPS supported by each virtual machine. If you have an I/O-intensive system, you'll need premium disks. Standard disk performance is poor, especially on basic machines (max 300 IOPS).
  • Bandwidth is expensive. 1 TB of outgoing traffic will cost you around 80 euro, so if you have a public website you have to use a CDN for most resources (especially images). Otherwise you're going to pay a lot just for the bandwidth.
  • There's no snapshot functionality for VMs. You have to manage backups manually (using Windows Server Backup, for example). Microsoft should make this a priority.
  • For databases, the Basic and Standard options are useless except for very small apps. If you have more than 4-5 databases, look at Elastic Pools. They have better performance and are cheaper (if you have more than a few databases).

The process of moving the websites themselves is pretty standard; moving the databases is another story. The first thing you'll find out is that SQL Server for Azure does not support restore functionality! So how do you move databases to Azure? If you check the official Microsoft documentation, your options are:

  1. The SSMS Migration Wizard, which works for small databases, as pointed out by Microsoft itself
  2. Export/Import through the BACPAC format, which is cumbersome and works only for small and medium databases. You have to export from SSMS, upload the file to blob storage (standard, not premium) and then import it from there.
  3. A combination of BACPAC (for the schema) and BCP (for the data), which gets complicated.

Fortunately I didn't have to go through any of them. I have used xSQL tools for database comparison and synchronization for a few years now and they are the perfect option to migrate your databases to Azure. The process is straightforward:

  1. Create an empty database in Azure (see the T-SQL sketch after this list).
  2. Use xSQL Schema Compare to compare the schema of your existing database with the new empty database in Azure. In the comparison options you can fine-tune the details; for example, I do not synchronize users and logins, because I prefer to check the security stuff manually. After the comparison it will generate a synchronization script that you can execute directly against Azure, and your new database will have the same schema as the existing one.
  3. Use xSQL Data Compare to compare the data. Since both databases have the same schema, it will map all tables correctly (the exception is tables without a primary key, which you shouldn't have! Still, if for some reason you have one, you can use custom keys to synchronize them as well) and generate the synchronization script. If the database is large, the script will be large as well, but it will take care of executing it properly. I had some databases in the range of a few GB and it worked very well.
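For the first step, an Azure SQL database can be created straight from T-SQL while connected to the master database of your Azure SQL server (a sketch; the database name, edition and service objective below are placeholders to adjust):

    CREATE DATABASE MyAppDb
    (
        EDITION = 'Standard',
        SERVICE_OBJECTIVE = 'S1',
        MAXSIZE = 50 GB
    );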

In addition to working well, this approach has another very important benefit. If you're moving medium/big websites, it's unlikely that the migration will be done in one step. Most likely it will take weeks or months to complete and you'll have to synchronize the databases continuously. You just have to run schema compare first to move any potential schema changes and then data compare to synchronize the data. If you expect to perform these steps many times, you can even use the command line versions to automate it and sync everything with just one command.

Overall, Azure still has some way to go, but it's already a pretty solid platform. Some things, such as the lack of database restore, are surprising, but fortunately there are good alternatives out there.

Disclaimer: I've known the guys working at xSQL for a long time, but the above is a post from a happy customer. They have really great products for a very good price.

Welcome to BlogEngine.NET

If you see this post it means that BlogEngine.NET is running and the hard part of creating your own blog is done. There are only a few things left to do.

Write Permissions

To be able to log in, write posts and customize your blog, you need to enable write permissions on the App_Data and Custom folders. If your blog is hosted with a hosting provider, you can either log into your account's admin page or contact support.

If you wish to use a database to store your blog data, we still encourage you to enable this write access for any images you may wish to store for your blog posts. If you are interested in using Microsoft SQL Server, MySQL, SQL CE, or other databases, please see the BlogEngine docs to get started.

Security

When you've got write permissions set, you need to change the username and password. Find the sign-in link located either at the bottom or top of the page depending on your current theme and click it. Now enter "admin" in both the username and password fields and click the button. You will now see an admin menu appear. It has a link to the "Users" admin page. From there you can change your password, create new users and set roles and permissions. Passwords are hashed by default, so you had better configure email in settings for password recovery to work, or learn how to do it manually.

Configuration and Profile

Now that you have your blog secured, take a look through the settings and give your new blog a title.  BlogEngine.NET is set up to take full advantage of many semantic formats and technologies such as FOAF, SIOC and APML. It means that the content stored in your BlogEngine.NET installation will be fully portable and auto-discoverable.  Be sure to fill in your author profile to take better advantage of this.

Themes and Plugins

One last thing to consider is customizing the look and behavior of your blog. We have themes and plugins available right out of the box. You can install more right from the admin panel under Custom.

On the web

You can find news about BlogEngine.NET on the official website. For tutorials, documentation, tips and tricks visit our docs site. The ongoing development of BlogEngine.NET can be followed at Github. You can also subscribe to our Youtube channel.

Good luck and happy writing.

The BlogEngine.NET team

Is Bayern destroying its identity?!

This transfer window Bayern has been pretty active in both directions. The transfers so far are:

In

  1. Joshua Kimmich - from Stuttgart
  2. Sven Ulreich - from Stuttgart
  3. Douglas Costa - from Shakhtar Donetsk
  4. Arturo Vidal - from Juventus
  5. Pierre-Emile Højbjerg - back from a loan at Augsburg
  6. Jan Kirchhoff - back from a loan at Schalke 04
  7. Julian Green - back from a loan at Hamburger SV

Out

  1. Bastian Schweinsteiger - to Manchester United
  2. Mitchell Weiser - to Hertha BSC
  3. Claudio Pizarro - retired
  4. Pepe Reina - to Napoli
  5. Rico Strieder - to FC Utrecht

By far the most controversial transfer is Schweini going to ManU after 17 years with Bayern. There are factions of the fans who are not happy at all about the transfer, accusing Guardiola and Rummenigge of destroying the Bayern identity. Peter Neururer (ex Bochum coach) took it one step further (read more). I must confess that I'm in the same camp and think that a strong German/Bavarian core is a must for FC Bayern to be successful.

If you look at the full squad of 27 players, 12 are Germans, which is not too bad (44%). However, looking more closely, if we take out the three goalkeepers, of the 24 field players only 9 are Germans (37.5%). Even more worrying is the picture of the starting eleven, which very likely has only Neuer, Lahm, Boateng and Muller as German starters (36%).

Whenever Bayern has been successful, it has had a very strong German core, which formed the core of the National Team as well:

  • 1974, 75, 76 - Maier, Beckenbauer, Breitner, Schwarzenbeck, Hoeneß, Roth, Müller
  • 1999, 2000, 2001 - Kahn, Babbel, Helmer, Linke, Matthaus, Basler, Effenberg, Jeremies, Scholl
  • 2012, 13, 14 - Neuer, Lahm, Boateng, Badstuber, Kroos, Schweinsteiger, Gotze, Muller

I'm afraid that this full Latinization under Guardiola (Dante, Rafinha, Costa from Brasil; Bernat, Thiago, Martinez, Alonso from Spain; Vidal from Chile) will contribute to more German/Bavarian players leaving. There are rumours that it was one of the reasons Schweini left, and Muller has voiced his concerns as well (read here). I hope I'm wrong, but my prediction is that although Bayern can/will win the Bundesliga, they will not get far in the Champions League.

Schumacher 1996 vs Vettel 2015

Since Vettel announced that he will join Ferrari, lots of comparisons have been made between him and Schumacher. The similarities are quite a few:

  • Both of them are Germans and World Champions
  • Schumacher joined Ferrari while they were in a big crisis and Vettel joined them in a similar situation
  • Ferrari restructured the team back in '96 and is doing the same now
  • etc

Now that half of the season has passed, we can try to compare Schumacher '96 and Vettel '15. This isn't very straightforward, because the cars are not the same, the competition is not the same, etc., but I will try to do an objective analysis.

First we'll compare the Ferrari of '96 against the Williams of '96, which was the class of the field, and at the same time the Ferrari of '15 against the Mercedes of '15. To do this I will compare the results of the second Ferrari driver in '96 and in '15: Irvine and Raikkonen.

Table 1, 1996 results

Table 2, 2015 results

In 1996 the average starting position of Irvine was 7.6, while in 2015 the average starting position of Raikkonen is 5.78. In the race Irvine had an average position of 5.2, Raikkonen 4.71. The difference in both cases is small, and taking into account that Raikkonen is a better driver (most people will agree) and a World Champion, we can say that on pure performance the Ferrari of '96 and of '15 compare similarly to the class of the field (but the Ferrari of '15 is much more reliable).

Now let's compare Schumacher's and Vettel's performance against the class of the field. Schumacher's average starting position was 2.5 with an average gap to pole of 0.548, while Vettel's average starting position is 3.22 with an average gap to pole of 0.747. Additionally, Schumacher had 3 pole positions, while Vettel has none. In defence of Vettel, the Mercedes of '15 is probably stronger in one-lap pace than the Williams of '96, but it still shows that Schumacher was one step ahead. If we compare the race results, Schumacher had an average position of 2, Vettel 3. On the other hand, Vettel has 2 victories, while Schumacher had only 1 (but he had 5 retirements to Vettel's none). Still, the balance is slightly in favor of Schumacher.

The next comparison is between Schumacher '96 and Vettel '15 against their teammates. Schumacher pretty much destroyed Irvine (qualifying 5.1 positions ahead with an average gap of 0.852), but Vettel has also clearly outperformed Raikkonen (qualifying 2.56 positions ahead with an average gap of 0.353). In both cases their teammates out-qualified them only once (excluding car troubles or rain). In the race Schumacher was 3.2 positions ahead of Irvine, while Vettel is 1.71 ahead of Raikkonen.

The last comparison we'll do is the championship position of both after 10 races. In '96 Schumacher was third on 26 points (Hill had 63, Villeneuve 48), while in '15 Vettel is also third, on 160 points (Hamilton has 202, Rosberg 181). Vettel is much closer and is still fighting for the championship, but this is mostly because the Ferrari of '96 was much more unreliable.

As a conclusion, the Schumacher of '96 was really exceptional and it is very hard for anyone to compete with him. However, the comparison clearly shows that Vettel is doing a great job against a better teammate and better opposition (most will agree that Hamilton/Rosberg are stronger than Hill/Villeneuve).

So well done to Vettel and keep pushing :)

World Champions for the fourth time!

After three unsuccessful tries (2002, 2006, 2010), when Germany fell at the final hurdle (to Brasil, Italy and Spain), they finally achieved the most important, glorious reward in football: THEY ARE WORLD CHAMPIONS for the fourth time in history! It was a well-deserved victory, culminating the hard work started by the DFB in the early 2000s when German football hit its lowest point. It has proved once more that with hard work, persistence and a high team spirit you can achieve the maximum.

There's already a lot of information on the web about the final, the statistics, etc., so I won't go into the details here.

After the World Cup, there are two important pieces of news related to "die Mannschaft": first, Toni Kroos moved to Real Madrid, which I believe will be a huge loss for Bayern. Kroos has really matured to a world-class level.

Second and most important, captain Philipp Lahm announced his retirement from "die Mannschaft". It will be a huge loss for the team, but it's the perfect choice to retire after winning the biggest prize. So I would like to say thank you to Lahm for 10 great years in the National Team and wish him all the best!

World Cup final...repeat of 1990?

After 62 mostly spectacular matches, we now have the finalists of the 20th World Cup: Germany vs Argentina. For most people born before the 80s it will ring bells of the two classic finals, in 1986 in Mexico and 1990 in Italy. I was only 6 years old when Germany were beaten by Argentina in 1986, but I still remember it; it's one of my first memories. This time, however, I hope history will repeat 1990 instead :) Going back 24 years, there are quite a lot of similarities:

  • In 1990 Germany started the tournament by demolishing Yugoslavia 4-1, while in 2014 they demolished Portugal 4-0.
  • In 1990 Germany was for the third time in a row in the Final (first time in history), while in 2014 they are for the fourth time in a row in the semi-finals (first time in history).
  • In 1990 Argentina was completely dependent on Maradona, in 2014 they are completely dependent on Messi.
  • In 1990 Argentina was missing Caniggia in the final, in 2014 they will most probably miss Di Maria.
  • Finally, Germany won the World Cup in '54, '74 so they have to win it in '14 :)

So good luck to Germany in adding that missing fourth star!

Germany last group match against USA. Strategy or not...

Tonight at 18:00 CET Germany plays the last game of the group stage against the USA in Recife. Germany is almost guaranteed to qualify, unless they lose by several goals and at the same time Ghana or Portugal win by a big margin (very unlikely). On the other hand, depending on tonight's result, Germany can "choose" the opponent for the next game as well as the "road" to the final. This raises the question: should they play for the win, or should strategy come into play to calculate the best "route" to the final? First, let's see the possible opponents:

If Germany draws or wins, they will qualify at the top of the group and the "leg" includes:

  • Brasil
  • Chile
  • Uruguay
  • Colombia
  • France
  • Nigeria
  • Algeria/Russia

On the other hand, if they lose, they will most likely still qualify and the "leg" includes:

  • Netherlands
  • Mexico
  • Costa Rica
  • Greece
  • Argentina
  • Switzerland
  • most likely Belgium

On paper the second group of teams looks easier. However, to get a better picture, let's analyze the most likely opponent in each round, as well as the place and time where the game will be played.

In the first scenario, we have:

  • Last 16: 30 June, Against Algeria/Russia in Porto Alegre, 17:00 local time. Rainy, Temperature 16°, Real Feel 13°, Humidity 80%
  • Quarter final: 4 July, Most likely against France in Rio de Janeiro, 13:00 local time. Cloudy, Temperature 26°, Real Feel 28°, Humidity 68%
  • Semi final: 8 July, Most likely against Brasil in Belo Horizonte, 17:00 local time. Cloudy, Temperature 26°, Real Feel 25°, Humidity 40%

In the second scenario, we have:

  • Last 16: 1 July, Against Belgium in Salvador, 17:00 local time. Cloudy, Temperature 25°, Real Feel 26°, Humidity 79%
  • Quarter final: 5 July, Most likely against Argentina in Brasilia, 13:00 local time. Sunny, Temperature 27°, Real Feel 27°, Humidity 31%
  • Semi final: 9 July, Most likely against Netherlands/Mexico in Sao Paulo, 17:00 local time. Cloudy, Temperature 16°, Real Feel 15°, Humidity 66%

With this comparison, the differences between the two routes are small. If we take into account the positive energy of a win, then the best option is the first one. So no strategy this time... just go out and play for the win :)

FIFA World Cup is starting...what are the Germany chances?

Today the 20th FIFA World Cup starts in Brazil. As a German fan, it's not acceptable not to post something about it :)

In qualifying, Germany won all but one match (that unbelievable 4-4 draw with Sweden) and, as always, they are one of the favorites. The friendly matches weren't that promising, but that has always been the case with Germany. What is a problem, however, is the injury of Marco Reus, forcing him to miss the World Cup. Undoubtedly he's one of Germany's best attacking players and he will be missed, but hopefully the others will step up. The squad that Low took to Brasil is:

Goalkeepers: Neuer, Weidenfeller, Zieler

Defence: Grosskreutz, Hoewedes, Hummels, Lahm, Mertesacker, Boateng, Mustafi, Durm

Midfield: Ginter, Khedira, Schweinsteiger, Oezil, Schuerrle, Podolski, Muller, Draxler, Kroos, Goetze, Kramer

Attack: Klose

There are still some uncertainties about the starting eleven, considering the injuries of Neuer, Schweinsteiger and Khedira, as well as the age of Klose. Based on the latest matches, I think Low will start with:

Neuer - Boateng, Mertesacker, Hummels, Howedes - Lahm, Schweinsteiger - Ozil, Kroos, Muller - Goetze

while I would prefer:

Neuer - Howedes , Mertesacker, Hummels, Durm - Lahm, Schweinsteiger - Schuerrle, Goetze, Muller - Klose

We have to wait a few more days to see the starting eleven against Portugal.

So what about my prediction for the tournament? For me it's time that a European team wins it in South America, and it has to be Germany. In the last 4 tournaments Germany was three times in the semi-finals and once in the final, and without some strange decisions by Low they should have won it at least once! They have a very good squad and now it's time to deliver. Regarding the opposition: Brasil has a strong squad as always, but nothing fearsome; Argentina has Messi, but the rest are mediocre; Spain isn't shining anymore and France without Ribery isn't scary; Italy has no star, but if they get past the group stage, you never know :). Still, my prediction is:

Group stage

- Win 3-1 against Portugal

- Win 1-0 against Ghana

- Draw 1-1 against USA

Round of 16

- As winners of the group they will play the runner-up of Group H, which I predict will be Russia. Result: 2-0

Quarter-finals

- The opponent will be the winner of the match between the winner of Group E and the runner-up of Group F. My prediction is that it will be Switzerland and the result 4-1.

Semi-finals

- At this stage the opponent will be the winner of Group A or C, or the runner-up of Group B or D. It has to be Brasil, and Germany will win on penalties after a 2-2 draw.

Final

- Uruguay will be the surprise team and will reach the final having beaten Spain and Argentina along the way. Germany wins the final 1-0 :)

Enjoy the World Cup and may Germany win it!