Tutorial: Cel-Shading With libGDX and OpenGL ES 2.0 using Post-Processing

1. Introduction

Sometimes photo-realism doesn’t suit the general style of your 3D game, or maybe you simply don’t have the resources to create photo-realistic assets. In this case you can use cel-shading to give your game a cartoony look. Prominent examples of this visual style are "The Legend of Zelda: The Wind Waker", "The Walking Dead", and "Borderlands".

Cel-shading result.

The general idea behind cel-shading is that light/shadow transitions are not continuous but have very few discrete steps. Cel-shading itself does not imply the rendering of outlines, but the two are almost always used in combination.

This tutorial is a work in progress. Some aspects might not yet be explained clearly enough, and it may still contain errors. Feel free to ask questions and leave constructive criticism.

 

2. Methods

In order to render a 3D scene with cel-shading we use a post-processing approach. Post-processing means that we achieve the effect by applying image operations to an image of the rendered scene.

  • For the shading we render the scene with a common lighting model like the Phong lighting model and discretize the colors of the resulting image to obtain the characteristic patches of equal lighting.
  • The outlines are calculated from the scene’s depth map using a common edge detection approach. The two results are combined to obtain the final image.

 

2.1. Frame Buffers and Shaders

In order to apply image operations to our scene we need to render it to a frame buffer. A frame buffer is basically a texture that can also be used as a render target: we can render something into it and use it later in another render pass. Luckily, libGDX provides a convenience class for this, so we don’t have to worry about the initialization.

FrameBuffer buffer = new FrameBuffer(Pixmap.Format.RGBA8888
    , Gdx.graphics.getWidth()
    , Gdx.graphics.getHeight()
    , true);

The color texture of a frame buffer can then be used as an input for a shader. A shader can be loaded using the ShaderProgram convenience class:

toonShader = new ShaderProgram(
    Gdx.files.internal("shader/toonify/toonify.vertex.glsl")
    , Gdx.files.internal("shader/toonify/toonify.fragment.glsl"));

 

2.2 Shading

The shading is achieved in two passes: first the scene is rendered to a frame buffer with a common lighting model. In the next step, we bind the frame buffer as a texture and use a shader to discretize the colors. The result is then rendered to the screen. Let’s have a look at the code:

// Initialization
ModelBatch modelBatch = new ModelBatch();
FrameBuffer buffer1 = new FrameBuffer(Pixmap.Format.RGBA8888
    , Gdx.graphics.getWidth()
    , Gdx.graphics.getHeight()
    , true);
// In the render method
FrameBuffer dest = buffer1;
dest.begin();{
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT
        | GL20.GL_DEPTH_BUFFER_BIT); // clear before drawing the scene
    Gdx.gl.glCullFace(GL20.GL_BACK);
    Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
    Gdx.gl.glDepthFunc(GL20.GL_LEQUAL);
    Gdx.gl.glDepthMask(true);
    // Render your scene here
    // I use an entity system that contains all my ModelInstances
    entitysystem.render(modelBatch, camera, lights);
}
dest.end();

We initialize modelBatch with the default constructor. This initializes the default shader that renders everything using a Blinn-Phong lighting model. We set our frame buffer as the current render target by calling begin(). Then we render whatever our scene contains and finish by calling end() on the frame buffer. After this step the frame buffer contains a rendering of our scene:

Scene using Blinn-Phong lighting.

We can now use this frame buffer as an input for a toon shader, which discretizes the lighting, and render everything on the screen. Before we dig into the shader code, let’s have a look at the host code:

// Initialization
toonShader = new ShaderProgram(
    Gdx.files.internal("shader/toonify/toonify.vertex.glsl")
    , Gdx.files.internal("shader/toonify/toonify.fragment.glsl"));
fullScreenQuad = createFullScreenQuad();
// In render method
src = dest; // the scene rendering becomes the input of the toon pass
src.getColorBufferTexture().bind();
{
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT
        | GL20.GL_DEPTH_BUFFER_BIT);
    toonShader.begin();{
        fullScreenQuad.render(toonShader, GL20.GL_TRIANGLE_STRIP
            , 0, 4);
    }
    toonShader.end();
}

We create the shader program toonShader and initialize it by loading the respective shader files. We bind the frame buffer’s color texture as an input and render the previously acquired image to the screen using the toon shader. fullScreenQuad is a rectangular mesh that exactly covers the screen. We can create such a mesh using the following code:

public Mesh createFullScreenQuad(){
    float[] verts = new float[16]; // 4 vertices * (2 position + 2 uv)
    int i = 0;
    verts[i++] = -1.f; // x1
    verts[i++] = -1.f; // y1
    verts[i++] =  0.f; // u1
    verts[i++] =  0.f; // v1
    verts[i++] =  1.f; // x2
    verts[i++] = -1.f; // y2
    verts[i++] =  1.f; // u2
    verts[i++] =  0.f; // v2
    verts[i++] =  1.f; // x3
    verts[i++] =  1.f; // y3
    verts[i++] =  1.f; // u3
    verts[i++] =  1.f; // v3
    verts[i++] = -1.f; // x4
    verts[i++] =  1.f; // y4
    verts[i++] =  0.f; // u4
    verts[i++] =  1.f; // v4
    Mesh tmpMesh = new Mesh(true, 4, 0
        , new VertexAttribute(Usage.Position, 2, "a_position")
        , new VertexAttribute(Usage.TextureCoordinates
            , 2, "a_texCoord0"));
    tmpMesh.setVertices(verts);
    return tmpMesh;
}

When we draw this mesh, our fragment shader is called for every pixel of the image. This enables us to apply a per-pixel image filter using a shader.

Pro tip: A quad is internally converted to two triangles and some of the fragments along the diagonal of the quad are processed twice due to batch processing of most GPUs. You can optimize this by using a triangle instead of a quad to cover the screen. You can use the coordinates (-1,-1), (-1,3), (3,-1) and the respective UV coordinates (0,0), (0,2), (2,0) to construct such a mesh. Thanks to kalle_h for the tip.
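
To convince yourself that the single triangle really covers the screen with the correct texture coordinates, you can interpolate its UVs at any clip-space point. The following plain-Java helper (class and method names are my own, purely for illustration) does the barycentric math and shows that the screen corners receive exactly the UVs a quad would produce:

```java
// Barycentric interpolation of the fullscreen triangle's UVs:
// vertices (-1,-1), (-1,3), (3,-1) with UVs (0,0), (0,2), (2,0).
public class FullscreenTriangle {
    // Returns the interpolated UV at clip-space point (x, y).
    public static float[] uvAt(float x, float y) {
        // Solve P = A + u*(B-A) + v*(C-A) with B-A = (0,4), C-A = (4,0):
        // u = (y+1)/4, v = (x+1)/4
        float u = (y + 1f) / 4f;
        float v = (x + 1f) / 4f;
        // uv = (0,0) + u*(0,2) + v*(2,0)
        return new float[]{2f * v, 2f * u};
    }

    public static void main(String[] args) {
        float[] corner = uvAt(1f, 1f); // top-right screen corner
        System.out.printf("uv at (1,1) = (%.1f, %.1f)%n",
            corner[0], corner[1]); // (1.0, 1.0), same as the quad
    }
}
```

Every point of the visible clip-space square [-1,1]² gets a UV inside [0,1]²; the parts of the triangle outside that square are clipped away.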

In the shader we read the color value of every pixel from the frame buffer texture and modify it as we wish. So what does the toon shader actually do? Here’s the code of the vertex shader:

#ifdef GL_ES
#define MED mediump
#else
#define MED
#endif

attribute vec4 a_position;
attribute vec2 a_texCoord0;
varying MED vec2 v_texCoord0;

void main(){
	v_texCoord0 = a_texCoord0;
	gl_Position = a_position;
}

The vertex shader simply passes the input to the fragment shader, nothing fancy. The real magic happens in the fragment shader:

#ifdef GL_ES
#define LOWP lowp
#define MED mediump
precision lowp float;
#else
#define LOWP
#define MED
#endif

uniform sampler2D u_texture;
varying MED vec2 v_texCoord0;

float toonify(in float intensity) {
    if (intensity > 0.8)
        return 1.0;
    else if (intensity > 0.5)
        return 0.8;
    else if (intensity > 0.25)
        return 0.3;
    else
        return 0.1;
}

void main(){
	vec4 color = texture2D(u_texture, v_texCoord0);
	float factor = toonify(max(color.r, max(color.g, color.b)));
	gl_FragColor = vec4(factor*color.rgb, color.a);
}

First we find the color channel with the highest value. This value is discretized by the toonify function and saved as a factor. Finally, the original color is multiplied by this factor and saved as the shader result. Shading done!
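
The step function is easy to study in isolation. Here is a plain-Java port (the class name is mine, for illustration only) showing how a continuous intensity ramp collapses into four flat bands:

```java
// Java port of the toonify() step function from the fragment shader.
public class Toonify {
    // Maps a continuous intensity in [0,1] to one of four discrete levels.
    public static float toonify(float intensity) {
        if (intensity > 0.8f) return 1.0f;
        else if (intensity > 0.5f) return 0.8f;
        else if (intensity > 0.25f) return 0.3f;
        else return 0.1f;
    }

    public static void main(String[] args) {
        // A smooth ramp collapses into a few flat bands:
        for (float i = 0f; i <= 1f; i += 0.1f) {
            System.out.printf("%.1f -> %.1f%n", i, toonify(i));
        }
    }
}
```

Tuning the thresholds and output levels directly controls how many lighting patches your scene gets.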

Scene rendering toonified.

 

2.3 Outlines

The shading alone looks quite cool already, but you probably want some outlines for the perfect comic-book look! It turns out that drawing outlines where the depth buffer has big jumps looks pretty good. So the first thing we need is a depth map of the scene. libGDX already contains a depth shader, but it uses front face culling and we need to cull the back faces. We copy the DepthShader and DepthShaderProvider classes from the libGDX source and rename them (e.g. FrontFaceDepthShader and FrontFaceDepthShaderProvider). Then we comment out the culling part or change it to back face culling:

DefaultShader.defaultCullFace = GL20.GL_BACK;

Additionally, we also need to write our own version of the depth shader:

attribute vec3 a_position;
uniform mat4 u_projViewWorldTrans;

#ifdef boneWeight0Flag
#define boneWeightsFlag
attribute vec2 a_boneWeight0;
#endif //boneWeight0Flag

#ifdef boneWeight1Flag
#ifndef boneWeightsFlag
#define boneWeightsFlag
#endif
attribute vec2 a_boneWeight1;
#endif //boneWeight1Flag

#ifdef boneWeight2Flag
#ifndef boneWeightsFlag
#define boneWeightsFlag
#endif
attribute vec2 a_boneWeight2;
#endif //boneWeight2Flag

#ifdef boneWeight3Flag
#ifndef boneWeightsFlag
#define boneWeightsFlag
#endif
attribute vec2 a_boneWeight3;
#endif //boneWeight3Flag

#ifdef boneWeight4Flag
#ifndef boneWeightsFlag
#define boneWeightsFlag
#endif
attribute vec2 a_boneWeight4;
#endif //boneWeight4Flag

#ifdef boneWeight5Flag
#ifndef boneWeightsFlag
#define boneWeightsFlag
#endif
attribute vec2 a_boneWeight5;
#endif //boneWeight5Flag

#ifdef boneWeight6Flag
#ifndef boneWeightsFlag
#define boneWeightsFlag
#endif
attribute vec2 a_boneWeight6;
#endif //boneWeight6Flag

#ifdef boneWeight7Flag
#ifndef boneWeightsFlag
#define boneWeightsFlag
#endif
attribute vec2 a_boneWeight7;
#endif //boneWeight7Flag

#if defined(numBones) && defined(boneWeightsFlag)
#if (numBones > 0)
#define skinningFlag
#endif
#endif

#if defined(numBones)
#if numBones > 0
uniform mat4 u_bones[numBones];
#endif //numBones
#endif

varying float v_depth;

void main(){
	#ifdef skinningFlag
		mat4 skinning = mat4(0.0);
		#ifdef boneWeight0Flag
			skinning += (a_boneWeight0.y) * u_bones[int(a_boneWeight0.x)];
		#endif //boneWeight0Flag
		#ifdef boneWeight1Flag
			skinning += (a_boneWeight1.y) * u_bones[int(a_boneWeight1.x)];
		#endif //boneWeight1Flag
		#ifdef boneWeight2Flag
			skinning += (a_boneWeight2.y) * u_bones[int(a_boneWeight2.x)];
		#endif //boneWeight2Flag
		#ifdef boneWeight3Flag
			skinning += (a_boneWeight3.y) * u_bones[int(a_boneWeight3.x)];
		#endif //boneWeight3Flag
		#ifdef boneWeight4Flag
			skinning += (a_boneWeight4.y) * u_bones[int(a_boneWeight4.x)];
		#endif //boneWeight4Flag
		#ifdef boneWeight5Flag
			skinning += (a_boneWeight5.y) * u_bones[int(a_boneWeight5.x)];
		#endif //boneWeight5Flag
		#ifdef boneWeight6Flag
			skinning += (a_boneWeight6.y) * u_bones[int(a_boneWeight6.x)];
		#endif //boneWeight6Flag
		#ifdef boneWeight7Flag
			skinning += (a_boneWeight7.y) * u_bones[int(a_boneWeight7.x)];
		#endif //boneWeight7Flag
	#endif //skinningFlag

	#ifdef skinningFlag
		vec4 pos = u_projViewWorldTrans * skinning * vec4(a_position, 1.0);
	#else
		vec4 pos = u_projViewWorldTrans * vec4(a_position, 1.0);
	#endif

	v_depth = (-pos.z-1.0) / 999.0;
	gl_Position = pos;
}

The important line is the calculation of v_depth:

v_depth = (-pos.z-1.0) / 999.0;

This takes the near and far values of the camera into account. In my case, near is 1 and far is 1000. The general formula is:

v_depth = (-pos.z-near) / (far-near);
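
As a quick sanity check, the mapping can be evaluated in plain Java (near = 1 and far = 1000 as in the text; the helper class is my own):

```java
// Linear depth normalization used by the depth vertex shader.
public class DepthRange {
    // eyeZ is the distance from the camera (-pos.z in the shader).
    public static float normalize(float eyeZ, float near, float far) {
        return (eyeZ - near) / (far - near);
    }

    public static void main(String[] args) {
        // With near = 1 and far = 1000 the general formula reduces to
        // the shader's (-pos.z - 1.0) / 999.0.
        System.out.println(normalize(1f, 1f, 1000f));    // 0.0 at the near plane
        System.out.println(normalize(1000f, 1f, 1000f)); // 1.0 at the far plane
    }
}
```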

Consider providing those values as uniform variables to make your code more generic. In the fragment shader the depth value is packed into an RGBA vector (8 bits per channel, 32 bits in total). I’ll leave understanding the pack function as an exercise for you:

#ifdef GL_ES
#define LOWP lowp
#define MED mediump
#define HIGH highp
precision mediump float;
#else
#define MED
#define LOWP
#define HIGH
#endif

varying HIGH float v_depth;

vec4 pack_depth(const in float depth){
    const HIGH vec4 bit_shift =
        vec4(256.0*256.0*256.0, 256.0*256.0, 256.0, 1.0);
    const HIGH vec4 bit_mask  =
        vec4(0.0, 1.0/256.0, 1.0/256.0, 1.0/256.0);
    vec4 res = fract(depth * bit_shift);
    res -= res.xxyz * bit_mask;
    return res;
}

void main(){
    gl_FragColor = pack_depth(v_depth);
}
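
If you want to work through the exercise, a double-precision Java port of pack_depth together with the matching unpack function (used later in the edge shader) shows that the two are exact inverses in real arithmetic; on the GPU the channels are additionally quantized to 8 bits. Class and method names here are mine:

```java
// Java port of the GLSL pack/unpack pair for storing a float in RGBA.
public class DepthPacking {
    static double fract(double x) { return x - Math.floor(x); }

    // pack_depth: spread the fractional bits of depth over four channels.
    public static double[] pack(double depth) {
        double[] shift = {256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0};
        double[] res = new double[4];
        for (int i = 0; i < 4; i++) res[i] = fract(depth * shift[i]);
        // res -= res.xxyz * bit_mask with bit_mask = (0, 1/256, 1/256, 1/256):
        // each channel drops the bits already stored in the channel above it.
        double x = res[0], y = res[1], z = res[2];
        res[1] -= x / 256.0;
        res[2] -= y / 256.0;
        res[3] -= z / 256.0;
        return res;
    }

    // unpack_depth: dot product with the inverse shifts.
    public static double unpack(double[] rgba) {
        return rgba[0] / (256.0 * 256.0 * 256.0)
             + rgba[1] / (256.0 * 256.0)
             + rgba[2] / 256.0
             + rgba[3];
    }

    public static void main(String[] args) {
        double d = 0.3;
        System.out.println(unpack(pack(d))); // ~0.3, the round trip cancels exactly
    }
}
```

Writing out unpack(pack(d)) term by term shows that every correction term cancels, which is why the depth survives the trip through the color buffer.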

We are now able to initialize depthModelBatch properly:

depthshaderprovider = new FrontFaceDepthShaderProvider();
depthModelBatch = new ModelBatch(depthshaderprovider);

Using depthModelBatch we can now render a depth map of our scene to a frame buffer:

dest = depthframebuffer;
dest.begin();{
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT
        | GL20.GL_DEPTH_BUFFER_BIT);
    entitysystem.render(depthModelBatch, camera, lights);
}
dest.end();

Rendering of the packed depth values.

The next step is to detect the edges in the depth map. I use a Laplace filter, but other kernels work as well. In the vertex shader we pass the texture coordinates of the current pixel and its four nearest neighbors to the fragment shader:

#ifdef GL_ES
#define LOWP lowp
#define MED mediump
#define HIGH highp
precision mediump float;
#else
#define MED
#define LOWP
#define HIGH
#endif

attribute vec4 a_position;
attribute vec2 a_texCoord0;
uniform vec2 size;
varying MED vec2 v_texCoords0;
varying MED vec2 v_texCoords1;
varying MED vec2 v_texCoords2;
varying MED vec2 v_texCoords3;
varying MED vec2 v_texCoords4;

void main(){
    v_texCoords0 = a_texCoord0 + vec2(0.0, -1.0 / size.y);
    v_texCoords1 = a_texCoord0 + vec2(-1.0 / size.x, 0.0);
    v_texCoords2 = a_texCoord0 + vec2(0.0, 0.0);
    v_texCoords3 = a_texCoord0 + vec2(1.0 / size.x, 0.0);
    v_texCoords4 = a_texCoord0 + vec2(0.0, 1.0 / size.y);
    gl_Position = a_position;
}

In the fragment shader we read the five depth values from the texture. unpack_depth reverses the packing to recover the float value that was stored in the 32-bit RGBA vector by the depth shader. Then the Laplace kernel is applied and the result is thresholded to obtain discrete outlines:

#ifdef GL_ES
#define LOWP lowp
#define MED mediump
precision lowp float;
#else
#define LOWP
#define MED
#endif

uniform sampler2D u_depthTexture;
varying MED vec2 v_texCoords0;
varying MED vec2 v_texCoords1;
varying MED vec2 v_texCoords2;
varying MED vec2 v_texCoords3;
varying MED vec2 v_texCoords4;

float unpack_depth(const in vec4 rgba_depth){
    const vec4 bit_shift =
        vec4(1.0/(256.0*256.0*256.0)
            , 1.0/(256.0*256.0)
            , 1.0/256.0
            , 1.0);
    float depth = dot(rgba_depth, bit_shift);
    return depth;
}

void main(){
    float depth =
        abs(unpack_depth(texture2D(u_depthTexture, v_texCoords0))
            + unpack_depth(texture2D(u_depthTexture, v_texCoords1))
            - 4.0 * unpack_depth(texture2D(u_depthTexture, v_texCoords2))
            + unpack_depth(texture2D(u_depthTexture, v_texCoords3))
            + unpack_depth(texture2D(u_depthTexture, v_texCoords4)));
    if(depth > 0.0004){
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
    }
    else{
        gl_FragColor = vec4(1.0, 1.0, 1.0, 0.0);
    }
}
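
The kernel logic is easy to verify off the GPU. A plain-Java version of the thresholded Laplacian applied to a tiny depth grid (class and test data are my own) flags exactly the pixels next to a depth discontinuity:

```java
// Thresholded 5-point Laplacian, as in the edge-detection fragment shader.
public class LaplaceEdge {
    // True if the pixel at (x, y) lies on a depth discontinuity.
    public static boolean isEdge(float[][] depth, int x, int y, float threshold) {
        float lap = depth[y - 1][x] + depth[y][x - 1]
                  + depth[y][x + 1] + depth[y + 1][x]
                  - 4f * depth[y][x];
        return Math.abs(lap) > threshold;
    }

    public static void main(String[] args) {
        // Flat region on the left, a step to depth 0.8 in the last column.
        float[][] depth = {
            {0.2f, 0.2f, 0.2f, 0.8f},
            {0.2f, 0.2f, 0.2f, 0.8f},
            {0.2f, 0.2f, 0.2f, 0.8f},
            {0.2f, 0.2f, 0.2f, 0.8f},
        };
        System.out.println(isEdge(depth, 1, 1, 0.0004f)); // false: flat area
        System.out.println(isEdge(depth, 2, 1, 0.0004f)); // true: next to the step
    }
}
```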

And this is how the outlines look:

Edge detection.

 

2.4 Summing Up

The necessary computation steps are:

  • Shading:
    1. Render the scene to a frame buffer.
    2. Render the frame buffer to the screen using the toon shader.
  • Outlines:
    1. Render a depth map of the scene to a frame buffer.
    2. Render the frame buffer to the screen using the Laplace shader.

 

2.5 Additional Improvements

We’re basically done at this point. However, you might want to sacrifice some performance to increase the quality of the outlines by using super-sampling and a median filter. Super-sampling is done by simply increasing the size of the frame buffers you use for the depth map and the edge detection, for example to twice the screen size in each dimension:

bigbuffer = new FrameBuffer(Pixmap.Format.RGBA8888
    , Gdx.graphics.getWidth()*2
    , Gdx.graphics.getHeight()*2
    , true);

The median filter removes lone pixels and amplifies clusters of pixels in the image. For every pixel, its neighborhood is checked: if the pixel doesn’t have enough neighbors that are part of an outline, it is removed. The following shader code achieves this, starting with the vertex shader:

#ifdef GL_ES
#define MED mediump
#else
#define MED
#endif

attribute vec4 a_position;
attribute vec2 a_texCoord0;
uniform vec2 size;
varying MED vec2 v_texCoords0;
varying MED vec2 v_texCoords1;
varying MED vec2 v_texCoords2;
varying MED vec2 v_texCoords3;
varying MED vec2 v_texCoords4;
varying MED vec2 v_texCoords5;
varying MED vec2 v_texCoords6;
varying MED vec2 v_texCoords7;
varying MED vec2 v_texCoords8;

void main()
{
    v_texCoords0 = a_texCoord0 + vec2(0.0 / size.x, -1.0 / size.y);
    v_texCoords1 = a_texCoord0 + vec2(-1.0 / size.x, 0.0 / size.y);
    v_texCoords2 = a_texCoord0 + vec2(0.0 / size.x, 0.0 / size.y);
    v_texCoords3 = a_texCoord0 + vec2(1.0 / size.x, 0.0 / size.y);
    v_texCoords4 = a_texCoord0 + vec2(0.0 / size.x, 1.0 / size.y);
    v_texCoords5 = a_texCoord0 + vec2(-1.0 / size.x, -1.0 / size.y);
    v_texCoords6 = a_texCoord0 + vec2(-1.0 / size.x, 1.0 / size.y);
    v_texCoords7 = a_texCoord0 + vec2(1.0 / size.x, -1.0 / size.y);
    v_texCoords8 = a_texCoord0 + vec2(1.0 / size.x, 1.0 / size.y);
    gl_Position = a_position;
}

The corresponding fragment shader averages the alpha values of the 3×3 neighborhood and thresholds the result:

#ifdef GL_ES
#define LOWP lowp
#define MED mediump
precision lowp float;
#else
#define LOWP
#define MED
#endif

uniform sampler2D u_texture;
varying MED vec2 v_texCoords0;
varying MED vec2 v_texCoords1;
varying MED vec2 v_texCoords2;
varying MED vec2 v_texCoords3;
varying MED vec2 v_texCoords4;
varying MED vec2 v_texCoords5;
varying MED vec2 v_texCoords6;
varying MED vec2 v_texCoords7;
varying MED vec2 v_texCoords8;
void main()
{
    MED float alpha = (texture2D(u_texture, v_texCoords0).a
        + texture2D(u_texture, v_texCoords1).a
        + texture2D(u_texture, v_texCoords2).a
        + texture2D(u_texture, v_texCoords3).a
        + texture2D(u_texture, v_texCoords4).a
        + texture2D(u_texture, v_texCoords5).a
        + texture2D(u_texture, v_texCoords6).a
        + texture2D(u_texture, v_texCoords7).a
        + texture2D(u_texture, v_texCoords8).a) / 9.0;
    if(alpha > 0.4){
        gl_FragColor = vec4(0.0,0.0,0.0, 1.0);
    }
    else{
        gl_FragColor = vec4(1.0,1.0,1.0,0.0);
    }
}
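
The voting step is straightforward to check in plain Java (class and example data are mine): a pixel keeps its outline only if enough of its 3×3 neighborhood is outline too.

```java
// 3x3 alpha-voting step of the outline cleanup shader.
public class OutlineFilter {
    // Average alpha over the 3x3 neighborhood of (x, y), thresholded at 0.4.
    public static boolean keepOutline(float[][] alpha, int x, int y) {
        float sum = 0f;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
                sum += alpha[y + dy][x + dx];
        return sum / 9f > 0.4f;
    }

    public static void main(String[] args) {
        // A lone outline pixel is removed (1/9 < 0.4) ...
        float[][] lone = {{0, 0, 0}, {0, 1, 0}, {0, 0, 0}};
        System.out.println(keepOutline(lone, 1, 1)); // false
        // ... while a pixel inside a thick line survives (6/9 > 0.4).
        float[][] line = {{1, 1, 1}, {1, 1, 1}, {0, 0, 0}};
        System.out.println(keepOutline(line, 1, 1)); // true
    }
}
```

The 0.4 threshold is the tuning knob: raising it removes more stray pixels but also thins legitimate outlines.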

You can see the visual difference in the following two images (no super-sampling):

Edges after median filter.

No median filter.

 

3. Result

Merging the shading and the outlines gives us the final cel-shaded image. I use a super-sampled version here, which is why the outlines appear thinner:

Final cel-shading result.

 

4. Discussion

Using all of these techniques can kill your performance pretty quickly, so consider your target platform carefully. A good idea is to give the user an option to enable or disable super-sampling and noise filtering, or maybe even outlines in general. However, cel-shading often makes low-polygon models look better, so you may be able to win back performance by decreasing your polycount. Furthermore, there are some potential improvements to this method which currently cannot be implemented due to limitations of OpenGL ES 2.0:

  • Using the depth buffer of the shading render pass: to my knowledge, this is not supported in OpenGL ES 2.0 without extensions. It would make the additional rendering of the scene obsolete and thus save a lot of computation time.
  • Finding additional edges in the normal map of the scene: this would probably yield a great increase in visual detail. The problem is that OpenGL ES 2.0 does not support passing uninterpolated data from the vertex to the fragment shader (flat varyings). Therefore, you always get an interpolated version of the face normal, which basically makes edge detection impossible.

 

5. Acknowledgements

I would like to thank kalle_h and Xoppa from the libGDX community for helping me figure out how to implement this technique.


About kbalentertainment

Marius Stärk works as a computer engineer in the broadcast industry. He spends his free time being a rock musician and developing games.
This entry was posted in Game Development and libGDX. Bookmark the permalink.

10 Responses to Tutorial: Cel-Shading With libGDX and OpenGL ES 2.0 using Post-Processing

  1. titoasty says:

    Thanks for your tutorial! The effect is really cool!

  2. Andreas says:

    Very nice tutorial! I have not tried implementing it yet, but I will soon, since I have been looking for help on getting depth-buffer shaders to work in libGDX. Do you happen to have any source code available? I am also curious what technique you used to generate the ground?

    • Thanks! You should try to implement it from the tutorial and if you get stuck somewhere, I can help you out 😉 Btw, I don’t actually know if this tutorial is still up to date. The ground was modelled in Blender. It’s not procedural.

  3. Andreas says:

    Thanks for the answer. I have tried to follow the tutorial part about the outline shader: copied the DepthShader and DepthShaderProvider, changed them to back face culling, and changed the default vertex and fragment shaders to the ones you posted. I set up frame buffers and a fullscreen quad (I got an IllegalArgumentException from your code; I had to change float[] verts = new float[20] to float[] verts = new float[16]), then in my render() I do this:

    dest = depthframebuffer;
    dest.begin();{
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT
    | GL20.GL_DEPTH_BUFFER_BIT);
    world.render(depthBatch, environment);
    for(int i=0; i<modelInstances.size; i++)
    depthBatch.render(modelInstances.get(i), environment);
    }
    dest.end();

    src = dest;
    src.getColorBufferTexture().bind();
    {
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT
    | GL20.GL_DEPTH_BUFFER_BIT);
    outlineShader.begin();{
    fullScreenQuad.render(outlineShader, GL20.GL_TRIANGLE_STRIP
    , 0, 4);
    }
    outlineShader.end();
    }

    The outline shader is copied from here as well, of course.

    But all I get is a white screen with a big black triangle covering 25% of the screen (like one black and three white fans in a triangle fan). I was expecting to see outlines. If I get this to work, I am also not sure how to combine the outlines with the "normal" scene? Any help appreciated, thanks.

    • Well, there are a whole bunch of things that could have gone wrong here, so it’s hard to say. The tutorial was also written for the libGDX version that was current at the release date of the article, so some things might have changed in the meantime and broken the code.

      Combining the outlines with the scene is done in another shader pass. You can for example render the outlines to a framebuffer. Then render your scene normally with a shader, that takes the outlines as a texture input and blends everything together in the fragment shader.

      • Andreas says:

        Yeah, I don’t want to post my whole code here; it is pretty bloated with physics and game code. I wonder if you would be interested in helping me get a working cel/outline shader? (I think that libGDX has not changed with regards to shaders since they released the new 3D API.) I don’t currently have the time to look more at this, so I would be happy to compensate you if you could help me out. Please contact me if that is the case 🙂

      • Pretty deep into work myself currently, sorry 😉 Btw, the problem with the triangle on the screen sounds like wrong vertex coordinates or ordering (probably in addition to another problem).

  4. Samuel says:

    Could you post a complete source code example? Please. I tried but it didn’t work.
