\subsection{Wiimote IR input}

As mentioned in the introduction, we used the Nintendo Wii controller for both head tracking and 3D (mouse) input. In both applications we made use of the infrared sensing ability of the controller. A Wiimote has a built-in infrared camera that can track up to four infrared sources at a time. An on-board image processing chip processes the images acquired by the camera and outputs the coordinates of the infrared sources in camera coordinates, which are reported to the client (PC) via Bluetooth at a frequency of 100 Hz. Several sources report that the infrared camera has a horizontal and vertical resolution of $ 1024 \times 768 $ pixels with viewing angles of $ 40^{\circ} $ and $ 30^{\circ} $ respectively. However, the author of the wiimote interface library that we used claimed that his measurements indicated that the reported coordinates never exceed the range $ [0,1015]\times[0,759] $. Since there are no official specifications of the hardware inside the Wii controller, we assumed the resolution of the camera to be $ 1016 \times 760 $. \\

In order to use the wiimote as a 3D mouse, two infrared sources (beacons) have to be positioned a certain distance apart, above or below the display. The camera coordinates of these beacons are used to calculate the world coordinates (in $ \mathbb{R}^3 $) of the mouse cursor. The beacon shipped with the Wii console is misleadingly called the `sensor bar'. It is a plastic bar that houses two groups of five infrared LEDs, one group at each tip of the bar. The distance between the LED groups is approximately $ 20.5 $ cm. Other, after-market sensor bars that we used during testing and development of the software contained two groups of three LEDs each. \\

To control the 3D mouse cursor, the user points the Wii remote at the display such that both LED groups are registered by the wiimote's camera. The xy-position of the mouse cursor is set by pointing the wiimote in the direction of the desired location, and the position of the cursor in the z-direction is controlled by moving the remote toward or away from the screen. \\


\subsubsection{Mapping to 2D}

First we discuss the mapping of the wiimote's infrared output to a 2D mouse cursor position; from there we extend the mapping to 3D. In the ideal situation the wiimote reports the 2D camera coordinates $ g_l $ and $ g_r $ of the left and right LED group respectively. From these coordinates we calculate a cursor position $ c_{pix} $ in pixel coordinates. Because the screen resolution is likely to differ from the camera resolution, we actually calculate a cursor position $ c_{rel} \in [0,1]^2 $ relative to the camera resolution and obtain the actual pixel coordinates simply by multiplying $ c_{rel} $ by the screen resolution. \\

Figure \ref{fig:2d_mapping} shows a diagram of a camera image of $ g_l $ and $ g_r $ and the corresponding screen with cursor position $ c_{pix} $. In our mapping we first calculate $ g_c $ as the average point in camera coordinates between $ g_l $ and $ g_r $. Then $ g_c $ is inverted and counter-rotated before it is mapped to $ c_{rel} $ and finally converted to $ c_{pix} $. \\

\begin{figure}[h!]
\begin{center}
\includegraphics{img/2d_mapping}
\end{center}
\caption{Diagram showing the mapping of $ g_l $ and $ g_r $ to $ c_{pix} $.}
\label{fig:2d_mapping}
\end{figure}


Because we require only one reference point for the 2D mapping, we choose $ g_c $ to be the average of the two LED groups in camera coordinates, which corresponds to the approximate center of the sensor bar. \\

The inversion of the coordinates of $ g_c $ is required to compensate for the fact that the camera is being moved instead of the sensor bar. When the camera points upwards, the sensor bar is visible at the bottom of the image; when the camera points downwards, the sensor bar is visible at the top of the image. The same holds for left and right. \\

The next step is to counter-rotate $ g_c $ to compensate for the rotation of the wiimote about its z-axis (roll). Note that the user moves the wiimote with respect to the screen's axes and expects the cursor to follow those movements. Because the camera coordinates of $ g_c $ are only inverted and scaled to the screen resolution, the cursor will only follow the wiimote's movements when the camera's axes are aligned with the screen's axes. To align them, we counter-rotate the coordinates of $ g_c $ around the center of the camera. The angle by which we rotate is the angle between the camera's x-axis and the screen's x-axis. Because we assume the sensor bar to be aligned with the screen's x-axis, we can simply calculate the angle of $ \overline{g_l g_r} $ with the camera's x-axis. Using this method we can only detect an angle in the range $ [0^{\circ}, 180^{\circ}) $, because the wiimote only reports the coordinates of $ g_l $ and $ g_r $ and not whether they belong to the left or the right LED group. As a solution, one can use the accelerometer data to check whether the wiimote is held upside down and set the appropriate sign of the angle. \\

The rest of the mapping is straightforward: let $ g'_c $ be the corrected camera coordinates, then $ c_{rel} $ is computed by dividing $ g'_c $ by the camera resolution and $ c_{pix} $ is obtained by multiplying $ c_{rel} $ by the screen resolution. \\
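
To make these steps concrete, the following sketch combines them in a few lines of Python (the function and variable names are ours and purely illustrative; the $ 1016 \times 760 $ camera resolution is the assumption discussed above):

\begin{verbatim}
import math

CAM_W, CAM_H = 1016.0, 760.0   # assumed camera resolution (see above)

def map_2d(g_l, g_r, screen_w, screen_h):
    """Map the camera coordinates of the two LED groups to a cursor
    position in screen pixels (average, invert, counter-rotate, scale)."""
    # g_c: average of the two LED groups, roughly the sensor-bar centre.
    gx = (g_l[0] + g_r[0]) / 2.0
    gy = (g_l[1] + g_r[1]) / 2.0

    # Invert, because the camera moves instead of the sensor bar.
    gx, gy = CAM_W - gx, CAM_H - gy

    # Counter-rotate around the camera centre to compensate for roll.
    # Note: the IR data alone cannot tell which group is the left one,
    # so the angle is only known up to 180 degrees (see text).
    angle = math.atan2(g_r[1] - g_l[1], g_r[0] - g_l[0])
    cx, cy = CAM_W / 2.0, CAM_H / 2.0
    dx, dy = gx - cx, gy - cy
    c, s = math.cos(-angle), math.sin(-angle)
    gx, gy = cx + c * dx - s * dy, cy + s * dx + c * dy

    # Relative coordinates in [0,1]^2, then scale to the screen resolution.
    return gx / CAM_W * screen_w, gy / CAM_H * screen_h
\end{verbatim}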


\subsubsection{Mapping to 3D}

For the 3D mapping we convert $ g_l $ and $ g_r $ to a coordinate $ c_{rel} \in [0,1]^3 $, relative to an axis-aligned box $ \mathcal{B} $ in world coordinates. The box restricts the movement of the 3D cursor and is defined by two corner points $ p_{min} $ and $ p_{max} $ with the minimum and maximum coordinates respectively. From $ c_{rel} $ we compute the world coordinates $ c_{world} $ of the cursor by:
\[ c_{world} = p_{min} + c_{rel} \cdot (p_{max}-p_{min})\]
The relative x- and y-coordinates are computed in the same way as in the 2D case. The relative z-coordinate is calculated from the measured distance $ d $ between the sensor bar and the wiimote, as illustrated in Figure \ref{fig:z_mapping}. \\

\begin{figure}[!h]
\begin{center}
\includegraphics[scale=1]{img/z_mapping}
\end{center}
\caption{Diagram showing the mapping of the measured distance $ d $ between the wiimote and the sensor bar to the relative z-coordinate of the cursor.}
\label{fig:z_mapping}
\end{figure}

In order to convert $ d $ to a relative z-coordinate $ z_{rel} $ we define distances $ d_{min} $ and $ d_{max} $ such that:
\begin{align*}
z_{rel} & = 0 && \text{if} \; d \leq d_{min} \\
z_{rel} & = 1 && \text{if} \; d \geq d_{max} \\
z_{rel} & = \frac{d - d_{min}}{d_{max} - d_{min}} && \text{otherwise}
\end{align*}

The distances $ d_{min} $ and $ d_{max} $ depend on where the user is standing and have to be initialized according to the user's initial position. The difference between them determines how far a user has to reach in order to touch the far end of $ \mathcal{B} $. \\
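
As an illustration, the clamped mapping of $ d $ to $ z_{rel} $ and the conversion of $ c_{rel} $ to world coordinates could be implemented along the following lines (a minimal Python sketch; the names follow the notation used in the text, the example values are made up):

\begin{verbatim}
def z_relative(d, d_min, d_max):
    """Clamped linear mapping of the wiimote--sensor bar distance d
    to a relative z-coordinate in [0,1]."""
    if d <= d_min:
        return 0.0
    if d >= d_max:
        return 1.0
    return (d - d_min) / (d_max - d_min)

def cursor_world(c_rel, p_min, p_max):
    """c_world = p_min + c_rel * (p_max - p_min), componentwise."""
    return tuple(lo + r * (hi - lo)
                 for r, lo, hi in zip(c_rel, p_min, p_max))

# Example use: x_rel, y_rel from the 2D mapping, z_rel from the distance d.
# c_rel   = (x_rel, y_rel, z_relative(d, d_min, d_max))
# c_world = cursor_world(c_rel, p_min=(0, 0, -10), p_max=(10, 10, 0))
\end{verbatim}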

The calculation of the relative z-coordinate requires the distance between the Wii remote and the sensor bar. This distance is obtained by measuring the angle between the lines of sight of the left and right LED groups from the camera's point of view and calculating the length of a side of a right triangle. Let $ \Delta $ be the distance between the LED groups in the sensor bar. The distance $ d $ between the wiimote and the sensor bar can then be calculated as shown in Figure \ref{fig:wiimote_dist_calc}. The angle $ \theta $ is computed as the distance between $ g_l $ and $ g_r $ in pixels multiplied by the angle per pixel:
\[ \theta = |g_l-g_r| \cdot \theta_{pix} \]
Because the horizontal and vertical viewing angles per pixel are (approximately) equal ($ 40^{\circ}/1016 \approx 30^{\circ}/760 $), we do not have to distinguish between the horizontal and vertical directions and can define the angle per pixel (in radians) as:
\[ \theta_{pix} = \frac{\pi}{180}\cdot\frac{40}{1016} \]
Assuming that the wiimote is positioned perpendicular to the sensor bar, the distance can then be calculated as:
\[ d = \frac{\frac{1}{2}\Delta}{\mathsf{tan}(\frac{1}{2}\theta)} \]
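
In code, the distance estimate amounts to the following (a Python sketch; the angle per pixel is based on the assumed $ 40^{\circ} $ field of view over $ 1016 $ pixels defined above, and the function name is purely illustrative):

\begin{verbatim}
import math

THETA_PIX = math.radians(40.0) / 1016.0   # assumed angle per camera pixel

def wiimote_distance(g_l, g_r, delta):
    """Estimate the distance between the wiimote and the sensor bar.
    g_l, g_r: camera coordinates of the LED groups (pixels),
    delta:    real-world distance between the LED groups (e.g. 0.205 m)."""
    pixels = math.hypot(g_r[0] - g_l[0], g_r[1] - g_l[1])
    if pixels == 0.0:
        return float('inf')               # groups coincide, no estimate
    theta = pixels * THETA_PIX            # angle between the lines of sight
    return (0.5 * delta) / math.tan(0.5 * theta)
\end{verbatim}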

\begin{figure}[!h]
\begin{center}
\includegraphics[scale=1]{img/wiimote_dist_calc}
\end{center}
\caption{The diagram for the computation of the distance between the Wii remote and the sensor bar.}
\label{fig:wiimote_dist_calc}
\end{figure}

Because the wiimote is never exactly perpendicular to the sensor bar, the distance will only be an estimate. If the angle $ \alpha \in (0^{\circ}, 180^{\circ}) $ between the wiimote and the sensor bar deviates from $ 90^{\circ} $, then the measured distance $ |g_l-g_r| $ is a factor $ \mathsf{sin}(\alpha) $ smaller than the distance that would be perceived from a perpendicular viewing angle, where $ \alpha = 90^{\circ} $. The computed distance $ d $ will therefore be larger than the actual distance. However, the angle $ \alpha $ can be measured when using a customized sensor bar with three equally spaced LED groups. The angle can then be estimated from the ratio between $ | g_l - g_c | $ and $ | g_r - g_c | $, where $ g_c $ are the camera coordinates of the center LED group. \\

\subsubsection{Mapping to head position}

For the head tracking implementation we use a wiimote mounted on top of the display. This wiimote is used to track the position of the user's head with respect to the display. We use the position of the head to set a perspective projection such that it originates from the user's point of view. As a result the 3D scene can be observed from different angles and distances, and if done right the display appears to be a window into the 3D world behind it. For the wiimote to be able to track the position of the head, the user is required to wear two infrared sources mounted on his or her head. \\

The head position is first calculated in real-world measurements relative to the center of the display and then converted to world coordinates. For the best results, the wiimote has to be aligned with the axes of the display such that the wiimote's z-axis is perpendicular to the display. \\

Figure \ref{fig:headpos_mapping} illustrates the mapping of the camera coordinates to a 3D position in real-world measurements. Let $ d $ be the distance from the wiimote to the user's head, measured as described in the previous section. The point $ p_{cam} $ is the point in camera coordinates that is mapped to 3D; this point could be the center of the user's head, or the camera coordinates of the left or right eye in the case of stereo vision. \\

\begin{figure}[!h]
\begin{center}
\includegraphics[scale=1]{img/headpos_mapping}
\end{center}
\caption{The diagram for the computation of the position of $ p $ in real-world measurements.}
\label{fig:headpos_mapping}
\end{figure}

The angles $ \theta_x $ and $ \theta_y $ are obtained from the horizontal and vertical distance of $ p_{cam} $ to the center of the camera image, multiplied by the angle per pixel, as seen in (b). With these angles we compute the distances $ d_{xy} $ and $ d_{yz} $, which represent the lengths of the line segment from the wiimote to $ p $ projected onto the xy- and yz-planes. With the projected distances and the angles we compute the x- and y-coordinates of $ p $ as $ d_{xy} \cdot \mathsf{sin}(\theta_x) $ and $ d_{yz} \cdot \mathsf{sin}(\theta_y) $ respectively. The z-coordinate is computed using Pythagoras' theorem for the sides of a right triangle. \\

At this point in the calculation we have the position of $ p $ in real-world measurements relative to the camera. In order to get the position relative to the center of the screen, we only need to subtract the offset in the y-direction between the camera and the center of the screen from the y-coordinate of $ p $. \\

Now that we have the position of the head in real-world measurements, we only have to convert $ p $ to world coordinates in order to set a perspective projection that corresponds to the user's point of view. In our implementation we simply used the ratio between the actual screen height and a predefined screen height in world coordinates as the conversion factor.
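
The sketch below (Python) summarizes the head-position mapping. It assumes a camera that looks straight along its z-axis and recovers $ x $, $ y $ and $ z $ from the angular offsets and the estimated distance $ d $, which is one way to realize the computation described above; the camera constants, the vertical offset and the scale factor are illustrative assumptions.

\begin{verbatim}
import math

CAM_W, CAM_H = 1016.0, 760.0               # assumed camera resolution
THETA_PIX = math.radians(40.0) / 1016.0    # assumed angle per camera pixel

def head_position(p_cam, d, cam_offset_y, screen_h_real, screen_h_world):
    """Map a tracked camera point p_cam (pixels) with estimated distance d
    (metres) to a head position in world coordinates.
    cam_offset_y:   vertical offset between camera and screen centre (m)
    screen_h_real:  physical screen height (m)
    screen_h_world: predefined screen height in world coordinates"""
    # Angular offsets of p_cam from the centre of the camera image.
    theta_x = (p_cam[0] - CAM_W / 2.0) * THETA_PIX
    theta_y = (p_cam[1] - CAM_H / 2.0) * THETA_PIX

    # Camera looking along its z-axis: x = z*tan(theta_x),
    # y = z*tan(theta_y) and x^2 + y^2 + z^2 = d^2.
    tx, ty = math.tan(theta_x), math.tan(theta_y)
    z = d / math.sqrt(1.0 + tx * tx + ty * ty)
    x, y = z * tx, z * ty

    # Position relative to the screen centre instead of the camera.
    y -= cam_offset_y

    # Convert real-world measurements to world coordinates.
    scale = screen_h_world / screen_h_real
    return x * scale, y * scale, z * scale
\end{verbatim}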
