# # Inverse Lighting Tutorial
#
# In this tutorial we shall explore the inverse lighting problem:
# reconstructing a target image by optimizing the parameters of the
# light source using gradients.
using RayTracer, Images, Zygote, Flux, Statistics
# ## Configuring the Scene
#
# Reduce `screen_size` if the optimization is taking too long.
screen_size = (w = 300, h = 300)
# Now we shall load the scene using the [`load_obj`](@ref) function. For
# this we need the [`obj`](https://en.wikipedia.org/wiki/Wavefront_.obj_file)
# and [`mtl`](https://en.wikipedia.org/wiki/Wavefront_.obj_file#Material_template_library)
# files. They can be downloaded using the following commands:
#
# ```
# wget https://raw.githubusercontent.com/tejank10/Duckietown.jl/master/src/meshes/tree.obj
# wget https://raw.githubusercontent.com/tejank10/Duckietown.jl/master/src/meshes/tree.mtl
# ```
scene = load_obj("./tree.obj")
# Let us set up the [`Camera`](@ref). For a more detailed understanding of
# the rendering process look into [Introduction to rendering using RayTracer.jl](@ref).
cam = Camera(
    lookfrom = Vec3(0.0f0, 6.0f0, -10.0f0),
    lookat = Vec3(0.0f0, 2.0f0, 0.0f0),
    vup = Vec3(0.0f0, 1.0f0, 0.0f0),
    vfov = 45.0f0,
    focus = 0.5f0,
    width = screen_size.w,
    height = screen_size.h
)
origin, direction = get_primary_rays(cam)
# We should define a few convenience functions. Since we are going to calculate
# the gradients only with respect to `light`, we pass it as an argument to the
# function. Having `scene` as an additional parameter simply allows us to test
# our method on other meshes without having to run `Zygote.refresh()` repeatedly.
function render(light, scene)
    packed_image = raytrace(origin, direction, scene, light, origin, 2)
    array_image = reshape(hcat(packed_image.x, packed_image.y, packed_image.z),
                          (screen_size.w, screen_size.h, 3, 1))
    return array_image
end
showimg(img) = colorview(RGB, permutedims(img[:,:,:,1], (3,2,1)))
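# As a toy illustration of this pattern (our own example, unrelated to the
# renderer): Zygote differentiates with respect to a single argument when we
# close over the others, just as the optimization loop later closes over
# `scene`. The names `toy_f`, `b_fixed`, etc. are hypothetical.
toy_f(a, b) = sum((a .* b) .^ 2)
b_fixed = [1.0, 2.0]
toy_loss, toy_back = Zygote.pullback(a -> toy_f(a, b_fixed), [3.0, 4.0])
toy_grad, = toy_back(1.0)
## toy_loss == 73.0 and toy_grad == [6.0, 32.0], the gradient wrt `a` only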
# ## [Ground Truth Image](@id inv_light)
#
# For this tutorial we shall use a [`PointLight`](@ref) source.
# We define the ground-truth light source and render the target image. We
# will later assume that we have no information about this lighting
# condition and try to reconstruct the image.
light_gt = PointLight(
    color = Vec3(1.0f0, 1.0f0, 1.0f0),
    intensity = 20000.0f0,
    position = Vec3(1.0f0, 10.0f0, -50.0f0)
)
target_img = render(light_gt, scene)
# The presence of [`zeroonenorm`](@ref) is very important here. It rescales the
# values in the image to the range [0, 1]. If we skip this step, `Images` will
# clamp the values while generating the image in RGB format.
showimg(zeroonenorm(target_img))
# ```@raw html
# <p align="center">
# <img width=300 height=300 src="../../assets/inv_light_original.png">
# </p>
# ```
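# `zeroonenorm` itself is provided by RayTracer.jl; conceptually it is a
# linear rescaling onto [0, 1]. A standalone sketch (our own illustrative
# version, not the library implementation) would look like:
function zeroone_sketch(x)
    lo, hi = extrema(x)
    return (x .- lo) ./ (hi - lo)
end
## e.g. zeroone_sketch([2.0, 4.0, 6.0]) == [0.0, 0.5, 1.0]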
# ## Initial Guess of Lighting Parameters
#
# We shall make an arbitrary guess of the lighting parameters (intensity and
# position) and try to recover the image in [Ground Truth Image](@ref inv_light).
light_guess = PointLight(
    color = Vec3(1.0f0, 1.0f0, 1.0f0),
    intensity = 1.0f0,
    position = Vec3(-1.0f0, -10.0f0, -50.0f0)
)
showimg(zeroonenorm(render(light_guess, scene)))
# ```@raw html
# <p align="center">
# <img width=300 height=300 src="../../assets/inv_light_initial.png">
# </p>
# ```
# We shall store the images in the `results_inv_lighting` directory.
mkpath("results_inv_lighting")
save("./results_inv_lighting/inv_light_original.png",
     showimg(zeroonenorm(render(light_gt, scene))))
save("./results_inv_lighting/inv_light_initial.png",
     showimg(zeroonenorm(render(light_guess, scene))))
# ## Optimization Loop
#
# We will use the ADAM optimizer from Flux. (Try experimenting with other
# optimizers as well!) We could also use frameworks like Optim.jl for the
# optimization; we will show how to do that in a future tutorial.
opt = ADAM(1.0)
for i in 1:401
    loss, back_fn = Zygote._pullback(light_guess) do L
        sum((render(L, scene) .- target_img) .^ 2)
    end
    @show loss
    gs = back_fn(1.0f0)
    ## gs[1] is the pullback context; gs[2] holds the gradients wrt light_guess
    update!(opt, light_guess.intensity, gs[2].intensity)
    update!(opt, light_guess.position, gs[2].position)
    if i % 5 == 1
        save("./results_inv_lighting/iteration_$i.png",
             showimg(zeroonenorm(render(light_guess, scene))))
    end
end
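# The loop above follows the usual compute-loss / pullback / update pattern.
# Stripped of the renderer, the same pattern on a scalar toy problem (our own
# illustrative example, using plain gradient descent with an analytic gradient
# instead of ADAM and Zygote) reads:
function fit_scalar(target; lr = 0.1f0, steps = 100)
    guess = 0.0f0
    for _ in 1:steps
        grad = 2 * (guess - target)   ## gradient of (guess - target)^2
        guess -= lr * grad
    end
    return guess
end
## fit_scalar(5.0f0) converges to ≈ 5.0, just as `light_guess` is driven
## towards `light_gt`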
# If we generate a `gif` of the optimization process, it will look similar to this:
# ```@raw html
# <p align="center">
# <img width=300 height=300 src="../../assets/inv_lighting.gif">
# </p>
# ```