
## Tuesday, February 15, 2011

### IBM Watson plays Jeopardy

I watched IBM's supercomputer Watson play Jeopardy yesterday...

It's amazing that with today's state-of-the-art technology, people can build a machine that understands very complex questions in natural language and answers them correctly most of the time. As a CS student, it feels pretty good that the field you are devoting your energy to is actually challenging ordinary human intelligence~~ Watson proved that it is promising.

Still, there are several basic things I cannot get over, or in other words, limitations.

1. Why do they claim Watson answered those Jeopardy questions "by himself" just because it was not connected to the WWW? All the knowledge stored in the machine came from human beings. It still cannot create new knowledge through observation and reading.

2. Just like playing chess, playing Jeopardy proved that with current technology a machine can understand questions quite well...

3. HCI really matters for engaging people; just look at Watson's fancy avatar!

## Tuesday, February 1, 2011

### Machine Learning Reading Group note 1

Topic: Online EM for Unsupervised Models

At the end of the session, Hal raised a question: when using online EM (incremental or stepwise), we need an extra parameter to control the learning rate, so what makes it better than simple gradient descent? (Gradient descent likewise needs a learning-rate parameter, while batch EM needs no extra parameter at all.)

And Zhongqiang Huang pointed out that when we set

$\alpha = 1$

in the step-size schedule $\eta_k = (k+2)^{-\alpha}$, so that $\eta_k = \frac{1}{k+2}$,

the stepwise EM update $\mu \leftarrow (1-\eta_k)\,\mu + \eta_k\, s_k$ reduces (almost) to plain accumulation of sufficient statistics:

$\mu \leftarrow \mu + s_k$

since $(1-\eta_k)\,\mu + \eta_k\, s_k = \frac{(k+1)\,\mu + s_k}{k+2}$, and the overall scale of $\mu$ does not matter once the M-step renormalizes it into parameters.
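To make the update concrete, here is a minimal sketch of stepwise EM on a toy problem I picked for illustration (a mixture of two biased coins; the model, variable names, and smoothing constants are my own assumptions, not from the reading group): each observed example contributes per-example expected sufficient statistics $s_k$, and the running statistics $\mu$ are interpolated with step size $\eta_k = (k+2)^{-\alpha}$ before the M-step renormalizes them.

```python
import numpy as np

def stepwise_em(flips, n_coins=2, alpha=0.7, n_epochs=5, seed=0):
    """Stepwise (online) EM for a mixture of biased coins (illustrative sketch).

    flips: list of 0/1 arrays, each one draw of a coin flipped several times.
    alpha: step-size exponent in eta_k = (k+2)**(-alpha).
    """
    rng = np.random.default_rng(seed)
    pi = np.full(n_coins, 1.0 / n_coins)           # mixture weights
    theta = rng.uniform(0.25, 0.75, size=n_coins)  # head probabilities
    # running sufficient statistics per coin: [expected heads, expected tails, expected count]
    mu = np.zeros((n_coins, 3))
    k = 0
    for _ in range(n_epochs):
        for x in flips:
            h = x.sum()
            t = len(x) - h
            # E-step for one example: posterior over which coin produced x
            log_post = np.log(pi) + h * np.log(theta) + t * np.log(1 - theta)
            post = np.exp(log_post - log_post.max())
            post /= post.sum()
            s = np.stack([post * h, post * t, post], axis=1)  # per-example stats s_k
            # stepwise update: mu <- (1 - eta_k) * mu + eta_k * s_k
            eta = (k + 2) ** (-alpha)
            mu = (1 - eta) * mu + eta * s
            k += 1
            # M-step: renormalize sufficient statistics into parameters
            theta = (mu[:, 0] + 1e-9) / (mu[:, 0] + mu[:, 1] + 2e-9)
            pi = mu[:, 2] / mu[:, 2].sum()
    return pi, theta
```

Because the M-step divides the statistics by their own totals, scaling $\mu$ by any constant leaves $\theta$ and $\pi$ unchanged, which is exactly why with $\alpha = 1$ the interpolation is equivalent (up to that scale) to accumulating $\mu \leftarrow \mu + s_k$.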