In this project, you will implement a virtual scene that reproduces the famous Cappadocian hot air balloons; the concepts of lighting and texturing will therefore be required. The main visual elements that compose the scene are a terrain that is textured and height-modified in a shader, the hot air balloons, and a setting sun.
Virtual scene
The scene will be composed of a series of 3D models, namely:
Terrain
Formed from a plane (its construction is specified in the following chapters)
Balloons
Made up of several primitives (their construction is specified in the following chapters)
The Sun
Formed from a sphere
Camera positioning
The camera will be perspective and must be placed so that it includes all the elements of interest in the scene (the terrain, the balloons, and the sun) in the frame. It is not necessary to allow control of the camera's position or rotation during application execution, as it can remain fixed.
Balloon construction
The balloons in the scene are built using a series of primitives. The recommended version has the following components:
Nacelle – parallelepiped
Balloon body – sphere (with the deformation proposed below)
Connectors – 4 parallelepipeds
(See attached image)
Deformation in Vertex Shader
To get a result that is close to the real appearance of a hot air balloon body, we need to apply a simple elongation deformation to a sphere. The process involves programmatically moving, in the Vertex Shader, the vertices in the lower half of the sphere vertically. A visual representation of the result you will get is attached.
You will do this deformation by referring to the object coordinates of the sphere. A quick check of the model available in the framework ([login to view URL]) will show that it is defined at the origin and has radius 1. The vertices that are moved have a y coordinate in the range [-0.5, 0] in object coordinates. Note that each vertex is displaced by a different amount depending on this value.
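A minimal vertex-shader sketch of this idea is shown below. The STRETCH factor, the linear falloff, and the attribute/uniform names are assumptions (chosen to stay close to the style of the terrain snippet later in this document), so tune them until the result matches the attached image.
// GLSL code (vertex shader) – a possible sketch of the elongation
#version 330
in vec3 v_position;        // object-space position on the unit sphere
in vec2 v_texture_coord;

uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;

out vec2 texture_coord;

void main()
{
    vec3 pos = v_position;

    // Displacement factor: 0 at y = 0, growing linearly to 1 at y = -0.5.
    // Vertices below y = -0.5 receive the constant maximum displacement so
    // that the surface stays continuous (an assumption of this sketch).
    const float STRETCH = 1.0;
    float factor = clamp(-pos.y / 0.5, 0.0, 1.0);
    pos.y -= STRETCH * factor;

    texture_coord = v_texture_coord;
    gl_Position = Projection * View * Model * vec4(pos, 1.0);
}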
Texturing
Each component of a balloon will be textured. For the nacelle and connectors you can use the same texture for all balloons. The body must be textured by randomly choosing from a set of textures. It is mandatory that the body be textured differently from the rest of the components and that the texture be chosen from a set of at least 5 textures. An example is attached.
Balloon behavior
The balloons in the scene must rotate on concentric paths. The paths are circles parallel to the XOZ plane and sharing a common center. For this you need to choose a point C = (C.x, C.z) in the XOZ plane (for example (0, 0)) to designate the center of the circles.
For each balloon, a height H must be chosen (preferably randomly) at which it rotates (so the center of the trajectory will be (C.x, H, C.z)), as well as a radius R of the circle describing the trajectory and a rotational speed ω. The result will be a series of concentric trajectories, parallel to the XOZ plane and to each other, at different heights above the ground.
In addition, for a more realistic effect, each balloon will have an oscillation along its OY axis of the form Δy = A · sin(ω · Δt) (where A and ω will be chosen so that the visual effect appears realistic).
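Putting these parameters together, one possible way to express a balloon's center position at time t (before applying any other transforms) is the formula below; writing the oscillation frequency ω_osc separately from the rotational speed ω is an assumption made here so that the two motions can be tuned independently:
P(t) = ( C.x + R · cos(ω · t),  H + A · sin(ω_osc · t),  C.z + R · sin(ω · t) )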
Terrain construction
The terrain is initially flat, but the height of its vertices is then modified on the GPU, as explained in the section below. For this modification to be possible, the plane must be subdivided.
Deformation in Vertex Shader using a height map
You will perform the deformation in the Vertex Shader by taking the height information from the texture, to which you will apply a constant scaling factor depending on the desired result.
// GLSL code
const float Y_OFFSET = 0.5;
// Get vertex height from the height map
new_position.y = Y_OFFSET * texture(texture_1, v_texture_coord).r;
The result you will get is attached.
Texturing
The terrain, represented by the subdivided plane, will be textured using its texture coordinates. To achieve an interesting effect you will use two color textures, along with the height texture (the height map). Each fragment is colored according to its height. For a smooth transition, linear interpolation is required in the areas of medium height. The idea is highlighted in the attached diagram.
The function indicating the degree of interpolation must normalize the mid range [LOW_LIMIT, HIGH_LIMIT] to [0, 1]. It may look like this:
f(height) = (height − LOW_LIMIT) / (HIGH_LIMIT − LOW_LIMIT)
The result you should get is attached.
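A minimal fragment-shader sketch of this blend is given below, assuming the height reaches the fragment shader through a varying (here called frag_height) and using hypothetical names for the two color textures and the two limits:
// GLSL code (fragment shader) – a possible sketch of the height-based blend
#version 330
uniform sampler2D texture_low;    // color texture used for low areas
uniform sampler2D texture_high;   // color texture used for high areas

in vec2 texture_coord;
in float frag_height;             // height received from the vertex shader

layout(location = 0) out vec4 out_color;

const float LOW_LIMIT  = 0.2;
const float HIGH_LIMIT = 0.7;

void main()
{
    vec4 color_low  = texture(texture_low,  texture_coord);
    vec4 color_high = texture(texture_high, texture_coord);

    // Normalize the mid range [LOW_LIMIT, HIGH_LIMIT] to [0, 1];
    // clamp() keeps the factor valid below and above the mid range.
    float f = clamp((frag_height - LOW_LIMIT) / (HIGH_LIMIT - LOW_LIMIT), 0.0, 1.0);

    out_color = mix(color_low, color_high, f);
}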
Normal recalculations
The deformation in the vertex shader modifies the position of each vertex, following the methodology presented above. However, remember the formulas from the lighting calculations: they are based on the surface normal vector N⃗, which remains unchanged after the vertex displacement, and this leads to an erroneous lighting result.
In the following images the deformed terrain and a point light are drawn. Notice the differences between them: on the left, the plane normals are all oriented towards (0, 1, 0), while the image on the right illustrates the result after recalculating the normals.
At a conceptual level, we will use the finite difference method to approximate these normals: for each vertex we sample the height map at its texture coordinates and at the neighboring texels, and use these samples to determine the normal.
To calculate texelSize, we assume that the height map is a square image and that its size (resolution) is dim. A texel represents a discrete unit of the texture, and its relative size in UV coordinates can be calculated as follows:
texelSize = 1 / dim
The support texture (the height map) is sampled to determine the “height” of the texels (essentially, the brightness value of the texel, which we denote by h). The texture coordinates (texCoord) are used to extract the corresponding height value h, as well as the height values of its neighbors along the X and Z axes:
h = texture(heightMap, texCoord).r
h_right = texture(heightMap, texCoord + vec2(texelSize, 0)).r
h_up = texture(heightMap, texCoord + vec2(0, texelSize)).r
The next step is to calculate the gradients in the X and Z directions. The vertical scaling factor (y_offset) represents the value previously used to adjust the terrain height based on the height map (the Y_OFFSET constant in the code above):
Δx = (h_right − h) · y_offset
Δz = (h_up − h) · y_offset
Based on these variations, we construct two vectors, the tangent and the bitangent, in object space:
T⃗_OS = vec3(texelSize, Δx, 0)
B⃗_OS = vec3(0, Δz, texelSize)
We introduce texelSize into these vectors so that the horizontal distances correspond exactly to the size of one texel in object space; this way, the result does not depend on how the terrain is scaled.
We apply the model matrix to obtain the vectors in world space:
T⃗_WS = (M_model · vec4(T⃗_OS, 0)).xyz
B⃗_WS = (M_model · vec4(B⃗_OS, 0)).xyz
By moving from (T⃗_OS, B⃗_OS) to (T⃗_WS, B⃗_WS), we guarantee that the resulting directions take into account all the modeling transformations applied to the terrain, thus ensuring that the final normal N⃗, obtained as the cross product between B⃗_WS and T⃗_WS, correctly reflects the shape of the terrain in world space:
N⃗ = normalize(cross(B⃗_WS, T⃗_WS))
The attached image provides visual support for vectors and the result of the cross product.
Finally, the attached animation illustrates how the normals are recalculated based on the terrain height, as well as the result obtained after adding a directional light.
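Putting the steps above together, the normal recalculation can be sketched as a helper called from the terrain vertex shader's main(). The uniform names (heightMap, Model, y_offset) are assumptions to be adapted to your code; textureSize() is used here only as a convenient way to obtain dim automatically.
// GLSL code (vertex shader) – a possible sketch of the normal recalculation
uniform sampler2D heightMap;   // the height map, assumed square (dim x dim texels)
uniform mat4 Model;
uniform float y_offset;        // the same vertical scaling factor used for the displacement

// Approximates the world-space normal using finite differences on the height map
vec3 ComputeNormal(vec2 texCoord)
{
    // texelSize = 1 / dim, where dim is the height map resolution
    float texelSize = 1.0 / float(textureSize(heightMap, 0).x);

    // Heights of the current texel and of its neighbors along X and Z
    float h       = texture(heightMap, texCoord).r;
    float h_right = texture(heightMap, texCoord + vec2(texelSize, 0.0)).r;
    float h_up    = texture(heightMap, texCoord + vec2(0.0, texelSize)).r;

    // Height variations (gradients), scaled exactly like the displacement
    float dx = (h_right - h) * y_offset;
    float dz = (h_up    - h) * y_offset;

    // Tangent and bitangent in object space
    vec3 T_OS = vec3(texelSize, dx, 0.0);
    vec3 B_OS = vec3(0.0, dz, texelSize);

    // Bring both vectors into world space, then take the cross product
    vec3 T_WS = (Model * vec4(T_OS, 0.0)).xyz;
    vec3 B_WS = (Model * vec4(B_OS, 0.0)).xyz;

    return normalize(cross(B_WS, T_WS));
}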
Lighting
The scene lighting will be achieved using at least 2 types of light sources: point and directional. Each light source (regardless of its type) will have a specific color, and this color must be taken into account in the lighting calculations.
Lighting must be implemented using the Phong shading model, as well as the Phong reflection model.
Point light: This is represented by sources that emit light with the same intensity in all directions. This type of light must be implemented for each hot air balloon: the light must remain at a fixed position relative to the balloon and have a random color and intensity.
In the attached image, only the point lights have been activated.
Your implementation must support rendering multiple light sources in the same frame!
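As a hint only, the accumulation of several point lights could be organized as in the fragment-shader sketch below. The array size, the attenuation coefficients, the shininess exponent and all uniform/varying names are assumptions; an ambient term can be added once, outside the per-light sum.
// GLSL code (fragment shader) – a possible sketch for multiple point lights
#define MAX_POINT_LIGHTS 16

uniform vec3 point_light_pos[MAX_POINT_LIGHTS];
uniform vec3 point_light_color[MAX_POINT_LIGHTS];
uniform int  point_light_count;
uniform vec3 eye_position;

in vec3 world_position;
in vec3 world_normal;

// Phong reflection model for one light, given the normalized direction L
// from the fragment towards the light source
vec3 PhongLight(vec3 L, vec3 light_color)
{
    vec3 N = normalize(world_normal);
    vec3 V = normalize(eye_position - world_position);
    vec3 R = reflect(-L, N);

    float diffuse  = max(dot(N, L), 0.0);
    float specular = diffuse > 0.0 ? pow(max(dot(V, R), 0.0), 32.0) : 0.0;

    return light_color * (diffuse + specular);
}

vec3 AccumulatePointLights()
{
    vec3 result = vec3(0.0);
    for (int i = 0; i < point_light_count; i++) {
        vec3 L = normalize(point_light_pos[i] - world_position);

        // Simple distance attenuation (one possible choice of coefficients)
        float d = distance(point_light_pos[i], world_position);
        float attenuation = 1.0 / (1.0 + 0.1 * d + 0.01 * d * d);

        result += attenuation * PhongLight(L, point_light_color[i]);
    }
    return result;
}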
Directional light: This will illuminate all objects in the scene with the same intensity. What is specific to directional light is that the incident light vector L⃗ does not depend on the position of the light or of the fragment being illuminated (as it does for point and spot lights). So, for each fragment, the illumination will be calculated using the same vector L⃗ (corresponding to the direction of the light). Thus, for a directional light source, it is necessary to define its direction and the color of the emitted light. In this assignment, we will consider the sun as the `source` of this directional light, so you will need to set the direction of this light accordingly.
In the attached image, only the directional light has been activated.
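For the directional light, the only difference is that L⃗ is the same for every fragment. Reusing the hypothetical PhongLight helper from the previous sketch, and assuming a uniform that stores the direction in which the sunlight travels:
// GLSL code (fragment shader) – a possible sketch for the directional light
uniform vec3 sun_direction;   // direction in which the sunlight travels
uniform vec3 sun_color;

vec3 DirectionalLight()
{
    // L points from the fragment towards the light source, so it is the
    // opposite of the travel direction and does not depend on the fragment
    vec3 L = normalize(-sun_direction);
    return PhongLight(L, sun_color);
}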