Data Science

Python: Numerical Calculation of the Maximum Likelihood Estimate

In this post I want to talk about regression and the maximum likelihood estimate. Instead of going the usual way of deriving the least square (LS) estimate, which coincides with the maximum likelihood (ML) estimate under the assumption of normally distributed noise, I want to take a different route. Instead of using the analytical LS solution, I want to show you how we can numerically compute the ML estimate.

Note: If you want to see the python code, check out this link.

Recap Likelihood
To give you a smooth start, let's quickly recap the definition of the likelihood (also called data likelihood). In the context of supervised learning, the likelihood function tells us how likely it is to observe a data set $D$ given an assumed structure of the underlying system that produced the data (here called model structure $\mathcal{M}$). By using a parameter vector $\theta$ to describe a concrete instance of the model, the likelihood function then describes the conditional probability of seeing the data given a concrete model $\mathcal{M}(\theta)$.

$$L(\theta) = p(D\ |\ \mathcal{M}(\theta))$$

The data $D$ consists of $N$ measured input and output data points (here, $x_i$ and $y_i$ values). Note that each value $x_i$ and $y_i$ could also be vectors for systems with multiple inputs or/and outputs.
$$D = X,Y \ \text{with}\  X = [x_1,\dots,x_N], Y = [y_1,\dots,y_N] $$

Maximum Likelihood Estimate (MLE)
If we assume that the structure $\mathcal{M}$ of the underlying system is known, our job is to find those model parameters which most likely represent the underlying true system. Of course, the above assumption of knowing the model structure a priori is a strong one and cannot always be assumed to hold. Now, if we hypothetically had a likelihood function that assigns a likelihood to each and every parameter, we could simply choose the parameter with the highest likelihood, which is called the maximum likelihood estimate. In the following, we will discuss how to calculate this likelihood function.
Note: In contrast to using the least square estimate to directly calculate the MLE, calculating the likelihood function manually has a great advantage which we will exploit in the next article. By using the least square estimate, we only retrieve the most likely parameter but not its likelihood. Moreover, we dismiss all other parameters, no matter how likely they might be. This way, one might end up with a parameter that is only marginally more likely than others while completely ignoring other, also plausible solutions.

Example with Line Model
$$\mathcal{M}(x,\theta) = b + m\cdot x,\quad \theta = [b,m]$$
To make things a little more intuitive, let's look at the example below. In the first plot, the black points represent our data set. It is easy to see that the points follow a line with some added noise. Remember that a line can be described by two parameters, its offset $b$ and its gradient $m$ (the formula is given above).
In the second plot, we added three lines with different parameters to the plot. It is obvious that the blue line best describes the data points because it deviates least from them. To put this in a different, more probabilistic perspective, we could also argue that observing the given data is more likely if the measurements were taken from a system that has the shape of the blue rather than the red or the green line. As a consequence, the parameter vectors describing the green and red lines should get assigned a smaller data likelihood.
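To make the setup concrete, here is a minimal sketch of how such a data set and three candidate lines could be generated with numpy/matplotlib. The true parameters ($b=2$, $m=1$), the noise level $\sigma=0.5$ (mentioned further below) and the three candidate parameter pairs are only assumptions for illustration; the actual values used for the plots are in the linked code.

import numpy as np
from matplotlib import pyplot as plt

def line(x, b, m):
    return b + m * x

#generate noisy data around an assumed true line (sigma = 0.5, see below)
b_true, m_true, sigma_true, N = 2.0, 1.0, 0.5, 30
x = np.linspace(0, 4, N)
y = line(x, b_true, m_true) + np.random.normal(0, sigma_true, N)

#three hypothetical candidate parametrizations (values picked for illustration)
candidates = {'blue': (2.0, 1.0), 'red': (0.5, 2.0), 'green': (3.5, 0.2)}

plt.plot(x, y, 'k.', label='data points')
for color, (b, m) in candidates.items():
    plt.plot(x, line(x, b, m), color=color, label='b=%.1f, m=%.1f' % (b, m))
plt.xlabel('x'); plt.ylabel('y'); plt.legend()
plt.show()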

[Figure: data points together with the three candidate lines]

Numerical calculation of likelihood

The example above illustrates that we can intuitively assign likelihoods to different lines. But how do we numerically calculate the likelihood for different parametrizations? Again, we have to make some more assumptions before we can perform the calculation.
We assume that the data has been disturbed by white Gaussian noise with zero mean and standard deviation $\sigma$, which must be known a priori (see below). The data can then be described using the following formula. It simply means that the data is normally distributed around the model itself, which can be seen by rearranging the formula.
$$y = \mathcal{M}(x,\theta) + v,\quad v \sim \mathcal{N}(\mu=0,\sigma) \quad\Leftrightarrow\quad y \sim \mathcal{N}(\mu=\mathcal{M}(x,\theta),\ \sigma)$$
In practice, $\sigma$ could be taken from the measurement noise specification, which might be known from a data sheet. In the code (see the link above), we used a standard deviation of $\sigma = 0.5$ to create the artificial data. To make things more realistic, we assume that we have no prior knowledge and thus use a value of $\sigma = 2.0$ as a rough estimate.
As the probability is assumed to be normally distributed around the model, it is highest if the data and model values are the same and becomes smaller the more the data deviates from the model. The plot below illustrates the probability around the model by plotting the first two standard deviations around the three lines.
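As a small illustration, continuing the sketch above and using the assumed rough estimate $\sigma = 2.0$, such bands could be drawn with matplotlib's fill_between:

#shade the first two standard deviations around each candidate line
sigma = 2.0  #assumed noise level (rough estimate, see text)
for color, (b, m) in candidates.items():
    mu = line(x, b, m)
    for k in (1, 2):
        plt.fill_between(x, mu - k * sigma, mu + k * sigma, color=color, alpha=0.15)
plt.plot(x, y, 'k.')
plt.show()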

[Figure: the first two standard deviations plotted around each of the three lines]

Making use of the above assumption, we can now calculate the likelihood for each parameter configuration. We define the likelihood as the product of the probabilities of the individual data points. Note that in the equation below, we left out the constant factor of the normal distribution because it does not depend on the model, and we are only interested in finding the model with the highest likelihood, not its exact likelihood value.
$$L = \prod^N_{n=1} p(y_n\,|\,\theta,\ \sigma) = \prod_{n=1}^N \mathcal{N}(y_n\,|\,\mu=\mathcal{M}(x_n,\theta),\ \sigma) \propto \prod^N_{n=1} \exp\left(-\frac{(y_n -\mathcal{M}(x_n,\theta))^2}{2\sigma^2}\right)$$
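Sticking with the sketch above (and dropping the normalization constant as just discussed), the likelihood of a single parameter pair could be computed roughly like this; the helper names are my own, not the ones from the linked code:

def gaussian_unnormalized(y, mu, sigma):
    #Gaussian density without the constant normalization factor
    return np.exp(-(y - mu) ** 2 / (2.0 * sigma ** 2))

def likelihood(x, y, b, m, sigma):
    #product of the point-wise probabilities of all data points
    return np.prod(gaussian_unnormalized(y, line(x, b, m), sigma))

#likelihood of the three candidate lines under the assumed sigma
for color, (b, m) in candidates.items():
    print(color, likelihood(x, y, b, m, sigma))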

If we calculate the likelihood for the three lines, we can see that the blue line has the highest likelihood because the data points deviate the least from the model (left plot). Note that we use the 10th root of the likelihood because the differences in the likelihood are so huge that the values of the red and green lines would be completely invisible if we used the values directly. This is because the assumed level of noise (represented by $\sigma = 2.0$) is rather small.
By increasing $\sigma$, we assume that our data is distorted by noise with a much greater amplitude. As a consequence, it becomes harder to tell which line the data points belong to. The example below shows how the likelihoods of the three lines become increasingly similar as we increase sigma over the range $\sigma \in [1,30]$ (right plot).

[Figure: likelihoods of the three lines (left) and how they become similar as $\sigma$ increases (right)]

So far, we only calculated the likelihood for three randomly chosen lines with fixed parameters. Now, to find the maximum likelihood estimate for the parameters $m$ and $b$, we calculate the likelihood over a whole range of parameters. To do so, we create a grid of parameter pairs (meshgrid). To limit the computational demands, we use parameter boundaries of $m\in[0,3]$ and $b \in [0,3]$ and a resolution of 50, resulting in $50\cdot50=2500$ parameter combinations for which we then calculate the likelihood.
Since we have two parameters, we can visualize the likelihood using a two-dimensional contour plot. The darkness then indicates the likelihood of the parameter pair. Since the likelihoods differ by many orders of magnitude, we again transform the values using the 10th root to improve the visibility of the shape of the likelihood function.
We also added the parameter values of the three lines to the plot. As expected, the blue parameter pair is much closer to the maximum of the likelihood function than the green and red line parameters.
Then, we get the index of the maximum value in the meshgrid of likelihood values. We use this array index to access the corresponding parameter pair from the array (meshgrid) of parameters. As we can see, the maximum likelihood parameter (yellow) is slightly offset from the true parameter value, i.e. the value that was used to generate the data. This is because of the random noise which has been added to the data. Due to the normally distributed noise and the limited number of data points, the data points are not evenly distributed along the true line. As a result, a different, slightly offset line is estimated.
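A rough sketch of this grid search, reusing the likelihood helper from above and the stated boundaries and resolution, could look as follows:

#evaluate the likelihood on a 50x50 grid of (b, m) pairs
res = 50
B, M = np.meshgrid(np.linspace(0, 3, res), np.linspace(0, 3, res))
L = np.array([likelihood(x, y, b, m, sigma)
              for b, m in zip(B.ravel(), M.ravel())]).reshape(B.shape)

#contour plot of the 10th root of the likelihood and the maximum
plt.contourf(B, M, np.power(L, 0.1), cmap=plt.cm.binary)
idx = np.unravel_index(L.argmax(), L.shape)
print('MLE: b = %.2f, m = %.2f' % (B[idx], M[idx]))
plt.plot(B[idx], M[idx], 'yo', label='MLE')
plt.xlabel('b'); plt.ylabel('m'); plt.legend()
plt.show()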

[Figure: contour plot of the likelihood over the parameter grid, with the three line parameters and the maximum likelihood estimate marked]

The error becomes smaller as the number of data points grows. In the plot below, we compare the MLE with respect to the number of data points used for the calculation of the likelihood function.
As one can see, using only a single data point creates a very broad likelihood function with no clear maximum. Since we are only using a single point, we basically assign equal likelihood to every line that goes through this point.
Using more data points, the maximum slowly builds up around the original parameter value. As we use more and more data points, the maximum becomes more prominent, reflecting a higher likelihood for parameters in this region. While in theory two points would be enough to exactly define a line, the likelihood function also considers the uncertainty of the measurements.
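One way to produce such a comparison, again only as a sketch on top of the grid from above, is to recompute the likelihood on growing subsets of the data:

#likelihood surface for an increasing number of data points
for n in (1, 2, 5, len(x)):
    Ln = np.array([likelihood(x[:n], y[:n], b, m, sigma)
                   for b, m in zip(B.ravel(), M.ravel())]).reshape(B.shape)
    plt.figure()
    plt.contourf(B, M, np.power(Ln, 0.1), cmap=plt.cm.binary)
    plt.title('likelihood using the first %i data point(s)' % n)
    plt.xlabel('b'); plt.ylabel('m')
plt.show()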

[Figure: likelihood functions for an increasing number of data points]

Data Science, Python

Python: Stochastic Prediction Model of the Least Square Estimate

If you want to see the code with syntax highlighting, download the gists for this post from my GitHub.

In the previous post, we looked at the numerical calculation of the maximum likelihood estimate (MLE). As you might know, we can obtain the same solution in a much easier way using the method of least squares. It can be shown that solving for the maximum likelihood under the assumption of normally distributed data gives the same solution as minimizing the quadratic error given below. Here, $\hat{y}$ and $y$ represent the predicted (model output) and the measured output values, respectively.

$$\sum_{n=1}^N (\hat{y}_n-y_n)^2$$
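To see why the two are equivalent, take the logarithm of the likelihood from the previous post. Since the logarithm is monotonic, maximizing $L$ and maximizing $\log L$ yield the same $\theta$:

$$\log L = \sum_{n=1}^N \log\mathcal{N}(y_n\,|\,\mathcal{M}(x_n,\theta),\ \sigma) = -\frac{1}{2\sigma^2}\sum_{n=1}^N \big(y_n-\underbrace{\mathcal{M}(x_n,\theta)}_{\hat{y}_n}\big)^2 + \text{const}$$

Neither $\sigma$ nor the constant term depends on $\theta$, so maximizing the (log-)likelihood is the same as minimizing the sum of squared errors above.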

Instead of calculating all likelihood values over the whole range of parameters and picking the parameter with the maximum, solving the least square problem can be done in a single step called the linear least square estimate (LSE). We can calculate this solution using the numpy function np.linalg.lstsq(X,Y). Here, X and Y are the so-called regression matrix and output vector. The least square problem can only be solved directly if the underlying model has a linear form, which means that each input value is multiplied by a corresponding parameter. In the case of the line model, this is the case and we can write it in a vectorized notation, which is also called a linear regression model.

$$y = \underbrace{[1,x]}_{\text{regression vector}}\cdot \begin{bmatrix} b\\ m \end{bmatrix}$$

For each data point, the regression vector has only two entries: the bias term $1$, which allows a constant offset defined by the bias parameter $b$, and the $n$-th input value $x_n$, which gets multiplied by the gradient $m$. Now, for each of the N measured data points, we create the corresponding regression vector. For each output $y_n$ we finally end up with a set of linear equations which can be written in the following matrix notation. The lstsq(X,Y) function takes exactly this regression matrix $X$ and vector of outputs $Y$ as arguments to solve for the $\theta$ that minimizes the quadratic error. Now let's apply this to the problem of the previous post and compare both solutions.

$$ Y = \begin{bmatrix} y_1 \\ \vdots\\ y_N \end{bmatrix} = \underbrace{\begin{bmatrix} [1,x_1] \\ \vdots\\ [1,x_N] \end{bmatrix}}_{X} \cdot \begin{bmatrix} b\\ m \end{bmatrix} $$

import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline

def line(x,b,m):
    return b+m*x;

#create 30 data points around a line with b = 2, m = 1 with added noise
mb_true = [2.0,1.0];
std_true = 1.0;
N = int(30);
x = np.vstack(np.linspace(0,4,N));
y = line(x,mb_true[0],mb_true[1]) + np.vstack(np.random.normal(0,std_true,x.shape[0]));


#Calculate MLE
def gaussian(x, mu, sig):
    return np.exp(-np.power(x - mu, 2.0) / (2.0 * np.power(sig, 2.0)))
def likelihood(x,y,model,std):
    mu = model(x);
    ps = gaussian(y,mu,std);
    l = 1;
    for p in ps:
        l = l*p;
    return l;
#create array to cover parameter space
res = 20;
M,B = np.meshgrid(np.linspace(-3.5,9,res),np.linspace(-0.5,2.5,res)); #note: M holds the offset (b) values, B the slope (m) values
MB = np.c_[M.ravel(),B.ravel()];
#calculate likelihoods
sigma = 2.0
L = np.array([likelihood(x,y, lambda x: line(x,mb[0],mb[1]),sigma) for mb in MB]).reshape(M.shape)
#select parameter with maximum likelihood
mb_max = np.array([M[np.unravel_index(L.argmax(),L.shape)],B[np.unravel_index(L.argmax(),L.shape)]]) 


#Calculate the LSE
def lineRegressor(x):
    phi = np.concatenate((np.ones([len(x),1]),x),axis=1);
    return phi;
X = lineRegressor(x);
least_square_result = np.linalg.lstsq(X,y)
mb_ls = least_square_result[0].ravel();


#Draw results
x_temp = np.linspace(-5,15,2);#used to draw the lines beyond the data points
f,(ax1,ax2) = plt.subplots(1,2,figsize=(12,4));
#draw data points and true line
ax1.plot(x, y,'k.',markersize=15,label='data points');
ax1.plot(x_temp, line(x_temp,mb_true[0],mb_true[1]),'k--',label='True');
ax1.set_xlabel('x');
ax1.set_ylabel('y');
#draw estimated lines
ax1.plot(x_temp, line(x_temp,mb_max[0],mb_max[1]),'y-',label='MLE');
ax1.plot(x_temp, line(x_temp,mb_ls[0],mb_ls[1]),'m-',label='LSE');
#draw likelihood
ax2.contourf(M,B,np.power(L,0.1),cmap=plt.cm.binary);
ax2.plot(M,B,'w+',markersize=4,alpha=0.5);
ax2.set_xlabel('b');
ax2.set_ylabel('m');
ax2.set_title('data likelihood');
#mark parameter estimate
ax2.plot(mb_max[0],mb_max[1],'yo',markersize=10,label='MLE');
ax2.plot(mb_ls[0],mb_ls[1],'mo',markersize=10,label='LSE');
ax1.legend();
ax2.legend();

As we can see, we end up with almost the same estimates for both the MLE and the LSE. The small difference is due to the quantization error. If we had used a higher resolution (more grid points, indicated by the white + signs in the right plot), we would also have found an estimate closer to the maximum. However, at the same time, a higher resolution results in an increased number of computations. In our case with two parameters, the number of computations in fact increases quadratically. This being said, we already see that calculating the LSE directly is far more efficient than numerically calculating the MLE.
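For reference, the minimizer that np.linalg.lstsq computes can also be written in closed form via the well-known normal equations (assuming $X^\top X$ is invertible):

$$\hat{\theta} = (X^\top X)^{-1}X^\top Y$$

In practice, lstsq solves this with a numerically more robust matrix decomposition rather than forming the inverse explicitly, but the result is the same least squares solution obtained in a single step, with no grid search required.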

Predictive distribution

Now that we have estimated the parameters $b$ and $m$ using the method of least squares, we can use our model to predict new y values for yet unseen inputs x. In the left plot below, we predicted points for x values below zero and above 5, which were not covered by the data.
It becomes obvious that the predicted values, while lying on the estimated line, do not reflect the stochastic part of the data. In reality it might be important to also model this uncertainty when predicting new values. We can do so by adding a random variable $v$ to the model.

$$y = b+m*x+v,\ v \sim \mathcal{N}(0,\sigma)$$
Again, we assume that $v$ is normally distributed noise with zero mean (the mean becomes the line itself) and a standard deviation of $\sigma$. To find $\sigma$, we have to calculate the variance $\sigma^2$ of the residuals, which is defined as the normalized sum of squared errors (residuals). Note that we used the same assumption about our data when we calculated the estimate using the maximum likelihood method. In our case the assumption holds true, as the data has actually been disturbed by white Gaussian noise.

$$\sigma^2 = \frac{1}{N}\underbrace{\sum_{n=1}^{N} (\hat{y}(x_n)-y_n)^2}_{\text{RSS}}$$

Instead of calculating $\sigma$ manually, the lstsq(X,Y) function already returns the sum of squared residuals (RSS) as its second return value. This value reflects the non-normalized variance of the residuals; dividing it by the number of data points and taking the square root yields the standard deviation $\sigma$.

$$\sigma = \sqrt{\sigma^2} = \sqrt{\frac{RSS}{N}}$$

Now, we predict a new value by adding normally distributed noise of standard deviation $\sigma$ to the output. Thus, the predicted output itself becomes a random variable. We can visualize this distribution by drawing the first few standard deviations $\sigma$ around the model.
Note: Bishop calls this distribution the predictive distribution, as it assigns a probability to each predicted value. However, this can be confusing, as he also uses the term for the fully Bayesian approach of also modeling the model (parameter) uncertainty, which we will cover in the next post.

f, (ax1,ax2,ax3) = plt.subplots(1,3,figsize=(14,4),sharey=True);

#predict new data points
x_predict = np.concatenate((np.linspace(-5,-1,10),np.linspace(6,15,20)));
y_predict = line(x_predict,mb_ls[0],mb_ls[1]);
#draw
ax1.plot(x_temp,line(x_temp,mb_ls[0],mb_ls[1]),'m--',label='LSE');
ax1.plot(x, y,'k.',markersize=15,label='$y$');
ax1.plot(x_predict,y_predict,'m.',markersize=15,label='$\hat{y}$');
ax1.set_title('predictions');
ax1.set_xlabel('x');
ax1.set_ylabel('y');
ax1.legend();

#stochastic prediction
rss_ls = least_square_result[1]; #sum of squared residuals
std_ls = np.sqrt(rss_ls/N);
y_predict = line(x_predict,mb_ls[0],mb_ls[1]) + np.random.normal(0,std_ls,len(x_predict));
#draw
ax2.plot(x_temp,line(x_temp,mb_ls[0],mb_ls[1]),'m--');
ax2.plot(x, y,'k.',markersize=15);
ax2.plot(x_predict,y_predict,'m.',markersize=15);
ax2.set_title('stochastic prediction with $\sigma$ = %2.2f' %(std_ls));
ax2.set_xlabel('x');

#visualize predictive distribution
def variancePolygon(axes,model,xmin,xmax,resolution,std,alpha=1,color='r',drawUpTo=1,label=''):
    Y = list()
    X = np.linspace(xmin,xmax,resolution);
    for x in X:
        y = model(x)
        Y.append(y);
    Y = np.array(Y);
    for i in range(1,drawUpTo+1): 
        temp_std = std*float(i);
        temp_alpha = 0.5*float(drawUpTo-i+1)*alpha/(drawUpTo);
        #draw the band on the passed axes (instead of the current pyplot axes)
        axes.fill(np.append(X,np.flipud(X)),np.append(Y-temp_std,np.flipud(Y)+temp_std),color,alpha=temp_alpha,label="%i $\sigma$"%i)
    axes.legend();
variancePolygon(ax3,lambda x: line(x,mb_ls[0],mb_ls[1]),-5,15,2, std_ls,1.0,'m',3,'MLE');
ax3.plot(x, y,'k.',markersize=15,label='data points');
ax3.plot(x_predict,y_predict,'m.',markersize=15,label='predictions');
ax3.set_title('predictive distribution');
ax3.set_xlabel('x');
 
MATLAB

Ambulatory Glucose Profile – MATLAB tutorial


This post is meant to give a very short demonstration of how to generate an AGP plot from measured CGM data. AGP stands for ambulatory glucose profile and is basically a condensed visual representation of multiple days of continuous glucose measurements. It makes it possible to visually identify time ranges that are problematic and require an adjustment of the therapy. It does so by showing how much the glucose levels fluctuate over the course of a day. To this end, data from multiple days is used to generate interquartile range plots. Since I did not find too much information about how to generate these plots, I thought a quick and dirty tutorial might be helpful. For more information look here.

So let’s start!

At this point, we assume that the CGM data has already been preprocessed in such a way that each day is reduced to 24 hourly average values (x = hour, y = glucose value). Given data of 14 consecutive days (assuming no missing values), we have a total of 14*24 = 336 glucose readings. The sample code below creates a random dataset with a peak placed around 6 AM (hyperglycemia in the morning) and a minimum at around 6 PM (hypoglycemia in the late afternoon).

x = 0:1:24;
y = 100+100*rand(14,25).*repmat(sin(x*2*pi/25)+0.7,14,1);
h = plot(x,y); hold on;
xlabel('hour of day');
ylabel('glucose concentration in mg/dL');
xlim([0,24]);

We can plot the mean glucose profile by calculating the average for each hour of the day across all fourteen days. To do so, we use MATLAB's mean function:

havg = plot(x,mean(y),'r','lineWidth',3);

Now, the 25/75% interquartile range (IQR) shall be added to the graph. This range covers the central 50% of the glucose values and gives a good indication of how much the glucose values fluctuate. To show the upper and lower bounds, we have to calculate the 25th and 75th percentiles for each hour of the day.

p25 = prctile(y,25);
p75 = prctile(y,75);
h25 = plot(x,p25,'b--','lineWidth',3);
h75 = plot(x,p75,'b:','lineWidth',3);
legend([havg,h25,h75],'mean','25 percentile','75 percentile');

We want to fill the area between the lower and upper percentiles to better visualize the range. We can do this using MATLAB's fill function. The function takes a polygon in the form of x,y-coordinates and fills the area with a solid color. From both percentile curves, we create a polygon by simply appending the lower curve in reverse order to the upper curve. The x values must therefore also be reversed using the fliplr function.

h2 = fill([x,fliplr(x)],[p75,fliplr(p25)],'b');
set(h2,'facealpha',0.7);

The edgy appearance is due to the small number of intermediate points (we only have hourly values per curve). If we generated more support points, e.g. one for every minute of the day, the problem would be much less pronounced and could probably be ignored. Here, the shape looks very unsatisfying and should be visually improved. Therefore, we create a smooth transition between the intermediate points using spline interpolation with MATLAB's interp1 function. It takes the original x and y values first and then the new x positions for which the interpolated values shall be generated. The last argument indicates the method used for generating the intermediate points.
interp1(x,y,xnew,'spline');
Thus, we first create the interpolated curves using 200 intermediate points, which is enough for a smooth appearance. Then we repeat the above procedure to fill the area between them. The figure below shows the smoothed result.

xInterp = linspace(0,24,200);
p25Interp = interp1(x,p25,xInterp,'spline');
p75Interp = interp1(x,p75,xInterp,'spline');
h2 = fill([xInterp, fliplr(xInterp)], [p75Interp, fliplr(p25Interp)],'b');
set(h2,'facealpha',0.7);

We can add another interquartile range. Traditionally, this would be the 10/90%-IQR.

p10 = prctile(y,10);
p90 = prctile(y,90);
p10Interp = interp1(x,p10,xInterp,'spline');
p90Interp = interp1(x,p90,xInterp,'spline');
h3 = fill([xInterp,fliplr(xInterp)],[p90Interp,fliplr(p10Interp)],[0,0,1]);
set(h3,'facealpha',0.3);

The average value should also be smoothed and added to the plot at the very end so that it is not covered by the filled polygons. Note that we used a smaller alpha value for the second IQR to distinguish between the two. The overall result and code are shown below.


 

%% create random sample data: hourly average glucose values for 14 days
x = 0:1:24;
y = 100+100*rand(14,25).*repmat(sin(x*2*pi/25)+0.7,14,1);

%draw data and average curve
h = plot(x,y); hold on;
xlabel('hour of day');
ylabel('glucose concentration in mg/dL');
xlim([0,24]);
havg = plot(x,mean(y),'r','lineWidth',3);
pause();

%% draw percentile curves
p25 = prctile(y,25);
p75 = prctile(y,75);
h25 = plot(x,p25,'b--','lineWidth',3);
h75 = plot(x,p75,'b:','lineWidth',3);
legend([havg,h25,h75],'mean','25 percentile','75 percentile');
pause();

%% fill 25/75 IQR
hIQR = fill([x,fliplr(x)],[p75,fliplr(p25)],[0,0,1]);
set(hIQR,'facealpha',0.7);
pause();

%% fill smoothed version of 25/75 IQR
%first, we remove the old plot
delete(h25);
delete(h75);
delete(hIQR);
%smooth using spline interpolation
xInterp = linspace(0,24,200);
p25Interp = interp1(x,p25,xInterp,'spline');
p75Interp = interp1(x,p75,xInterp,'spline');
%draw
hIQR = fill([xInterp,fliplr(xInterp)],[p75Interp,fliplr(p25Interp)],[0,0,1]);
set(hIQR,'facealpha',0.7);
legend([havg,hIQR],'mean','25/75-IQR');
pause();

%% add a smoothed 10/90-IQR
p10 = prctile(y,10);
p90 = prctile(y,90);
p10Interp = interp1(x,p10,xInterp,'spline');
p90Interp = interp1(x,p90,xInterp,'spline');
hIQR2 = fill([xInterp,fliplr(xInterp)],[p90Interp,fliplr(p10Interp)],[0,0,1]);
set(hIQR2,'facealpha',0.3);
legend([havg,hIQR,hIQR2],'mean','25/75-IQR','10/90-IQR');
pause();

%% add smoothed mean curve
yavgInterp = interp1(x,mean(y),xInterp,'spline');
havg = plot(xInterp,yavgInterp,'r','lineWidth',2);
legend([havg,hIQR,hIQR2],'mean','25/75-IQR','10/90-IQR');

DIY, Electronics

DIY concrete lamp with switch and power outlet

[Photo: the finished concrete lamp with switch and power outlet]
Today I finished my new project, a concrete lamp with a power outlet and a switch for the lamp. Usually, you will only find the one or the other. Actually, I've built a concrete lamp before, but without the switch and a power outlet.

To build the lamp, I followed the instructions of this youtube video. I used 5 pieces to build a 13cm*13cm*13cm cube. I mounted two flush boxes to hold the power outlet and the switch.

If you don’t know what you’re doing, don’t do it! Attention: Concrete might actually be conductive!

Mounting the boxes was actually more difficult than expected because the boxes overlapped by half a centimeter, so I had to trim off a bit. I drilled holes for the screws to hold the boxes in place. I also used some quick-setting concrete to seal the gaps so that no concrete could enter the boxes.

After the concrete had dried (about 4 days), I carefully drilled the holes to mount the outlet and switch. Make sure your switch is a two-pole switch! Otherwise your lamp socket might still be connected to the phase, depending on which way you plug in the power cord! Afterwards I applied two coats of concrete sealer.

Finally, I only had to connect the plug to the wire. I used white again to match the overall design, which turned out to be a good choice I think. I used white heat shrink tubing to protect the fabric cord. And finally … tada: done! Feel free to copy and comment!

 

App development

UIViewController offsets scrollview subviews

I was just wondering why my scroll view content would always be offset from what I set up in Storyboard (Xcode). I use a UIViewController and changed its view's class to UIScrollView.

The layout of my scrollview in Xcode Storyboard

The problem was that all the content which I laid out in Xcode was offset by the height of the toolbar at runtime. This did not happen when the view was of class UIView, as it is by default. I therefore tried to set the subviews' frames at runtime, but this seems to be restricted by the loaded nibs somehow.

Subviews are offset in the y direction by the height of the toolbar

for (UIView *subview in self.view.subviews) {
    [subview setFrame:CGRectOffset(subview.frame, 0, -[self navigationController].toolbar.frame.size.height)];
}

I finally found the reason for my problem in the attributes inspector of my ViewController.

Turn "Adjust Scrollview Insets off"
Turn “Adjust Scrollview Insets off”

The option "Adjust Scroll View Insets" must be disabled for this purpose. It is probably meant to prevent the content from being hidden behind the toolbar. But using the layout in Interface Builder to manage your subviews obviously interferes with this option. You should therefore turn it off!

Offset removed by turning off "Adjust Scroll View Insets"

Cheers,

Jan

Electronics

HSV RGB IKEA lamp + IR Remote

My first Awesome IKEA lamp had multiple trimmers and push buttons to set up speed, brightness and color. I have now added a remote control to get rid of all these. I chose one of those common RGB remote controls that are shipped with every other cheap RGB lamp. I got it from dx.com. The LED I use is a 3W RGB star LED from ebay with the following specifications:

Red = 2.4V, Green = 3.4V, Blue = 3.4V, 350mA per channel

Make sure you do not get one with a common "+"; if so, you can't use P-MOSFETs but must use the N-channel version instead. Common ground is no problem. You'll find a complete parts list in the Eagle file. The main changes from the original layout are:

  • P_MOSFET: IRLML6402
  • IR-RECEIVER: TSOP 31238
  • IR_REMOTE: CR2025

[Image: hsvrgblamp 1.0]

I uploaded everything including the schematic, board and the code: Download!

The code is neither commented nor nice. Feel free to improve it 😉 There is a little problem with this version: the P-MOSFETs seem to have a large leakage current, so the LEDs won't turn off completely. I think PORTD has too little output power…

Electronics

Awesome HSV RGB IKEA lamp

When my girlfriend gave me her birthday present, I was overwhelmed. The drawing was perfect; I loved the motif, the details and the shadows. And then her own birthday came up! This was the moment I decided not to buy one of her wishlist entries (yes, I keep track) but to build something on my own too (let's just stick to buying things, okay?). I thought about painting a picture too, but decided against it. I just didn't want to reveal my inner artist and make her jealous.

Since I'm a student of IT and electronics, I chose to build something wired. It had to be cool but also instructive for me, I thought. I had always planned on building a "proper" RedGreenBlue LED lamp which could fade between colors, but I never found the time nor the motivation to do something similar in my free time. The first attempts on my Arduino board were pretty basic. The colors were fading, yes, but randomly, which means (in RGB color space) pretty white changes to a white reddish, to a bright white reddish, to a bright white reddish with a touch of green or blue. This wasn't that exciting at all. I went to the mikrocontroller.net community and asked about color spaces. My idea was to make a distinction between color and brightness and to avoid equal colors with just different portions of white. I found that the HSV (hue, saturation, value) color space met my needs. It allowed me to use a simple potentiometer for choosing the color (hue) and a further one for the brightness, while the saturation remains constant (no white parts please ;-))

I started coding (check out GitHub) the code for an ATmega8 controller and tested it on a breadboard first. I found and bought a perfectly fitting aluminium globe at IKEA which could hold both the LED and the electronic circuit. Then I began with the final board layout in Eagle and ordered the other parts at reichelt: three potentiometers (two logarithmic ones), three NMOS transistors, resistors, capacitors, push buttons, a toggle switch and finally the power supply (5V/2A).

I attached the parts to the new board and tested it with Georg, who provided an oscilloscope and even a 3D printer. I designed and printed two plastic parts for the globe to hold the LED and the board.

I also changed the original design from a standing lamp to a lying one. I thought it would look nicer and light up a wall much better. Also, the potentiometers and buttons were better accessible (it's not a bug, it's a feature).

Everything looked fine except for some random flashing which bothered me. One user from the forum gave me a hint that there could be something wrong with my code, since he had the same problem in the past. So glad this guy read the post -> no flickering anymore!

During the project I learned a lot about the whole process of implementing a vision. I really started with a vague idea and then made use of a great toolchain (3D printing, µC development, board layout in Eagle, testing, testing and testing with an oscilloscope)… Also, I must say that there were a lot of coincidences, like a broken diode and a wrongly connected MOSFET, which I might not have noticed in other places… Maybe you (Kenzie) remember that I was happy about that one day on Skype? Yeah, I figured out the board was actually working … so this was my last week; for you! Be happy!

Looking forward to party in fading colors!

Reference:

https://github.com/kreuzUndQwertz/hsv2rgb (code and board layouts as well as the OpenSCAD 3D designs)

http://www.mikrocontroller.net/topic/262143#postform (thread on RGB color space and the flashing issue)