Last week I visited the Pioneers festival in Vienna. It was also the first public presentation of the new eClub startup X.GLU.
The X.GLU startup has developed a revolutionary glucometer, also called X.GLU. It is the smallest glucose meter: the size of a credit card, it simply slips into your wallet. X.GLU requires no batteries and no wires to read the sugar level on your smartphone. As long as your smartphone is charged, the glucose meter works. No maintenance required. X.GLU uses a standard connector for a biomedical sensor paper. It comes in a convenient bag along with disinfection tissues, lancets, and testing strips. The readout is transmitted over NFC, which provides a secure wireless link between X.GLU and the smartphone. Unlike Bluetooth, Wi-Fi, and similar wireless technologies, NFC cannot be sniffed from a distance of more than several inches. The measured values are displayed and stored on the smartphone. An encrypted connection sends the X.GLU data to the cloud and makes it available to physicians, who can provide instant feedback on treatment.
The smartphone app comes with a how-to video. It gives detailed instructions on how to treat the skin beforehand and how to take the blood sample properly. The app also conveniently reminds the user about the scheduled measurement time.
The mastermind of the new company is the inventor and owner Marek Novak, who came up with the idea for the glucometer. Marek is one of the most active students in eClub. He has already worked on several IoT-related projects, but X.GLU is the first one we want to get to production. eClub helped complement Marek’s knowledge and found sales and marketing experts to build a functional company. They are starting their operations from our scientific incubator.
This is great news for eClub. We will all do our best to help the company onto a productive and successful path to market. We are looking for other student teams with startup ideas. Join us during the eClub Summer Camp.
Tuesday, May 31, 2016
Tuesday, May 10, 2016
Our projects part 2
This is the second part of “What we do”, this time about our IoT activities.
Our IoT effort can be roughly divided into two parts: SW infrastructure and sensors. We use the standard IoT architecture combining a HUB and a cloud server. It is a typical IoT setup that collects sensor readings and controls actuators over the Internet. The HUB serves as a gateway to the Internet and as a concentrator for the sensor data. It is a simple computer, comparable in power to a router, equipped with Ethernet, WiFi, or both to connect to the Internet. In addition, it may have several other radios for sensor and actuator communication. The radios listen to the sensors continuously, which requires power; therefore, HUBs are usually not powered from a battery.
Our HUB is based on the Intel Edison, a dual-core 500 MHz Linux-based embedded computer with WiFi, BLE, and an 868 MHz free-band radio. It also includes three USB sockets for additional peripherals. The HUB runs Zetta, a simple Node.js server that handles management and communication with other servers. It allows a seamless connection to a similar Zetta server residing in the cloud. The linked Zetta servers communicate using Siren, a walkable JSON-based hypermedia format. The cloud-based servers allow a simple connection to the smartphone. Siren lets the smartphone build its UI from the configuration of the particular space covered by a HUB. We have designed and implemented an IoT control app for Android that configures itself based on location. In practice, this means that as soon as you enter a smart room or your car, the Android home page adapts to that environment, with the most frequently used controls on top.
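To give a feel for the format, here is a minimal Python sketch of walking a Siren entity. The device payload is purely illustrative: the field layout follows the Siren conventions (properties, actions, links), but the names, values, and hrefs are made up, not an actual Zetta response.

```python
# A made-up Siren entity standing in for one device exposed by the HUB.
siren_device = {
    "class": ["device"],
    "properties": {"type": "thermometer", "name": "lab-temp", "temperature": 21.5},
    "actions": [
        {"name": "set-interval", "method": "POST",
         "href": "/servers/hub/devices/1",
         "fields": [{"name": "interval", "type": "number"}]},
    ],
    "links": [{"rel": ["self"], "href": "/servers/hub/devices/1"}],
}

def find_action(entity, name):
    """Return the named Siren action, or None if the entity lacks it."""
    for action in entity.get("actions", []):
        if action["name"] == name:
            return action
    return None

def self_link(entity):
    """Return the entity's self href so a client can walk back to it."""
    for link in entity.get("links", []):
        if "self" in link.get("rel", []):
            return link["href"]
    return None
```

A smartphone client can use exactly this kind of walk to discover which controls a room offers and render the UI from the discovered actions rather than from hard-coded screens.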
We do not use WiFi for communicating with sensors. WiFi usually requires a lot of battery power, and it is primarily designed for the TCP/IP protocol, which very simple sensors such as thermometers may not need. A thermometer samples the environment temperature only every 10 minutes, for example, so we can let the sensor sleep most of the time. The radio wakes up only for the shortest communication needed to exchange information with the HUB. This approach allows us to design sensors with very low energy requirements.
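The energy budget of such a duty-cycled sensor is easy to estimate. The sketch below computes the average current draw; the currents and timings are illustrative assumptions, not measurements from our sensors.

```python
def average_current_ma(sleep_ua, active_ma, active_ms, period_s):
    """Average current draw of a duty-cycled sensor.

    sleep_ua  -- sleep current in microamps
    active_ma -- current while the radio is up, in milliamps
    active_ms -- time the radio is awake per wake-up, in milliseconds
    period_s  -- seconds between wake-ups (e.g. 600 for a 10 min cycle)
    """
    duty = (active_ms / 1000.0) / period_s  # fraction of time awake
    return active_ma * duty + (sleep_ua / 1000.0) * (1.0 - duty)

# A thermometer waking every 10 minutes for 20 ms at 15 mA,
# sleeping at 2 uA in between (assumed figures):
avg = average_current_ma(sleep_ua=2, active_ma=15, active_ms=20, period_s=600)
```

With these assumed numbers the average draw comes out around 2.5 µA, which is why even a tiny energy source can keep such a sensor alive indefinitely.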
The low sensor consumption allowed us to use one of the energy-harvesting approaches: photovoltaic (PV) cells. We have designed and put together a set of PV-powered, battery-less, wireless sensors. We can measure temperature, humidity, and motion (accelerometers and PIR). The PIR-equipped sensor is powered just by a fluorescent tube on the ceiling, and it has been sensing people coming into our lab for more than a year. We monitor the PV-accumulated energy and have so far never run out of power. We use the accelerometer-equipped sensors to check for open windows; the outside light provides enough juice. The sensors communicate with the HUB using 868 MHz radios. We have found this band more resistant to obstacles than WiFi or Bluetooth. Currently we use a proprietary protocol, but we are looking at LoRa and MQTT, which we use in other projects with the same HUB.
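As a sketch of what an MQTT-based HUB could forward to the cloud, here is one way to serialize a reading in Python. The topic scheme, field names, and sensor id are our own illustration, not anything fixed by MQTT or LoRa.

```python
import json
import time

def reading_payload(sensor_id, quantity, value, unit):
    """Serialize one sensor reading as JSON for an MQTT publish.

    The field names are an illustrative convention, not a standard.
    """
    return json.dumps({
        "sensor": sensor_id,
        "quantity": quantity,
        "value": value,
        "unit": unit,
        "ts": int(time.time()),
    }, sort_keys=True)

# With an MQTT client such as paho-mqtt, the HUB would forward the
# payload roughly like this (topic scheme is hypothetical):
#   client.publish("eclub/lab/%s/%s" % (sensor_id, quantity), payload)
payload = reading_payload("pv-07", "temperature", 21.5, "C")
```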
Some of the described work is part of Bachelor’s theses written by my students. We look forward to pushing our work even further during eClub Summer Camp 2016.
Tuesday, May 3, 2016
Our projects
Recently I was asked to review the latest development in our group, and I realised how much work we have done. I also noticed I have been neglecting my blog. Let’s fix that!
First the best news: over the last year our group has grown to five PhD and around ten MSc students working in machine learning.
Today I would like to start with part one and mention some of our progress in machine learning. In the second part I will describe our IoT effort. The main machine learning topics can be broken into the following categories:
- Natural Language Processing
  - Question answering: YodaQA
  - Intelligent assistants
  - Sentence pair similarity
  - Multinomial classification
- Information retrieval
  - Learning to rank
- Information extraction
  - Focused crawling
- Convolutional Neural Networks
  - Combining text and images
  - Image labeling
I’ll start with the major achievement, the YodaQA answering machine. It is an open source question answering system. It implements state-of-the-art methods of information extraction and natural language understanding — to answer human-phrased questions! You can try the live demo.
Along with YodaQA we have also worked on simpler Intelligent Assistants acting on a smaller number of commands. They rely on simpler algorithms that find the most similar answer for a given query. Sentence pair similarity is another topic of interest. These algorithms can help solve not only Answer Sentence Selection but also other interesting problems, such as Next Utterance Ranking, Semantic Textual Similarity, Paraphrase Identification, and Recognizing Textual Entailment. We have developed and tested a series of algorithms based on word embeddings and different neural network architectures.
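A minimal illustration of the embedding-based approach: average the word vectors of each sentence and compare the averages by cosine similarity. The three-dimensional “embeddings” below are toy values invented for the sketch; real models learn hundreds of dimensions from large corpora.

```python
import math

# Toy 3-dimensional word vectors (made up for illustration).
EMB = {
    "dog":    [0.9, 0.1, 0.0],
    "puppy":  [0.8, 0.2, 0.1],
    "car":    [0.0, 0.9, 0.4],
    "barks":  [0.7, 0.0, 0.2],
    "drives": [0.1, 0.8, 0.5],
    "the":    [0.3, 0.3, 0.3],
}

def sentence_vector(sentence):
    """Average the embeddings of the known words in a sentence."""
    vecs = [EMB[w] for w in sentence.lower().split() if w in EMB]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(3)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

sim_close = cosine(sentence_vector("the dog barks"),
                   sentence_vector("the puppy barks"))
sim_far = cosine(sentence_vector("the dog barks"),
                 sentence_vector("the car drives"))
```

Even this crude averaging ranks the paraphrase pair above the unrelated pair; the neural architectures we test replace the plain average with learned sentence representations.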
The multinomial classification algorithm also belongs to the NLP category. The use case we are testing is product categorization into a hierarchical directory structure. E-shops typically categorize products, such as a “14 inch screen notebook”, under notebooks, computers, electronics, etc. This process is handled by humans, who make mistakes; our algorithm can find problematically categorized entries or suggest the correct category.
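To sketch the idea, here is a tiny multinomial Naive Bayes classifier over toy product titles. The training data and the two flat categories are made up for the example; the actual system works on a full hierarchical catalogue with a more capable model.

```python
import math
from collections import Counter

# Made-up training titles per category.
TRAIN = {
    "notebooks": ["14 inch screen notebook", "ultrabook 13 inch laptop",
                  "gaming notebook 15 inch"],
    "phones": ["5 inch smartphone dual sim", "budget smartphone 16 gb",
               "phone with 5 inch screen"],
}

class NaiveBayes:
    def __init__(self, train):
        self.word_counts = {c: Counter() for c in train}
        vocab = set()
        for cat, titles in train.items():
            for title in titles:
                for w in title.lower().split():
                    self.word_counts[cat][w] += 1
                    vocab.add(w)
        self.vocab_size = len(vocab)
        self.totals = {c: sum(cnt.values()) for c, cnt in self.word_counts.items()}

    def predict(self, title):
        best, best_lp = None, -math.inf
        for cat in self.word_counts:
            lp = 0.0  # uniform prior over categories
            for w in title.lower().split():
                # Laplace-smoothed word likelihood P(w | cat)
                num = self.word_counts[cat][w] + 1
                den = self.totals[cat] + self.vocab_size
                lp += math.log(num / den)
            if lp > best_lp:
                best, best_lp = cat, lp
        return best

clf = NaiveBayes(TRAIN)
```

The same scoring also flags problematic entries: when the best and second-best category log-probabilities are close, the product is worth a human look.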
Adaptive ranking research holds an exclusive position in our group. The web’s content, its information relevancy and authority, and users’ interests change constantly. The goal of any search engine is to provide exactly what users are looking for. The newly developed algorithm relies on users to find the currently best ranking. It constantly observes which links users click on and adapts based on this feedback. This is very relevant for information search and recommendation services.
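A minimal sketch of the feedback loop: rank links by a smoothed click-through rate that is updated after every impression. This illustrates the principle of adapting a ranking from clicks; it is not our actual algorithm.

```python
class ClickRanker:
    """Re-rank links by smoothed click-through rate (illustrative)."""

    def __init__(self, links, prior_clicks=1, prior_views=2):
        # Each link starts with a small pseudo-count prior so that
        # new links are not buried before they get any feedback.
        self.stats = {link: [prior_clicks, prior_views] for link in links}

    def record(self, link, clicked):
        """Update the stats after showing `link` once."""
        self.stats[link][1] += 1
        if clicked:
            self.stats[link][0] += 1

    def ranking(self):
        """Links sorted by smoothed CTR, best first."""
        return sorted(self.stats,
                      key=lambda l: self.stats[l][0] / self.stats[l][1],
                      reverse=True)

ranker = ClickRanker(["a", "b", "c"])
for _ in range(10):
    ranker.record("b", clicked=True)   # users keep clicking b
    ranker.record("a", clicked=False)  # and keep ignoring a
```

After ten rounds of feedback, "b" rises to the top, the never-shown "c" keeps its neutral prior, and the ignored "a" sinks.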
Information extraction is the next topic completing our portfolio. Initially we looked at the basics of focused crawling, a strategy for crawling the Internet and extracting, for example, all mentions of AT&T and Linux. This led to the design of a crawler with a programmable search policy. Currently we are working on an even more sophisticated algorithm for extracting content from e-shop pages: segmentation, and extraction of the price, product name, etc. These are known problems, typically solved semi-manually by constructing scripts and then running the extraction. Our goal is highly accurate, general algorithms that work for all e-shops without any customization or training.
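The crawling strategy can be sketched as a best-first search that always expands the most relevant frontier page first. The in-memory “web” below stands in for live pages, and the keyword count is a stand-in for a real relevance model; both are assumptions made for the sketch.

```python
import heapq

# A made-up link graph: page -> (text, outgoing links).
PAGES = {
    "start": ("tech news portal", ["a", "b"]),
    "a": ("AT&T announces Linux support", ["c"]),
    "b": ("cooking recipes", ["d"]),
    "c": ("Linux kernel release notes", []),
    "d": ("more recipes", []),
}

def relevance(text, keywords):
    """Score a page by how many focus keywords it mentions."""
    t = text.lower()
    return sum(1 for k in keywords if k.lower() in t)

def focused_crawl(seed, keywords, budget=3):
    """Collect up to `budget` relevant pages, always expanding the
    most relevant frontier page first (best-first search)."""
    frontier = [(-relevance(PAGES[seed][0], keywords), seed)]
    seen, hits = {seed}, []
    while frontier and len(hits) < budget:
        score, page = heapq.heappop(frontier)
        text, links = PAGES[page]
        if -score > 0:
            hits.append(page)
        for nxt in links:
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-relevance(PAGES[nxt][0], keywords), nxt))
    return hits
```

The programmable part is the `relevance` function: swapping in a different scorer redirects the whole crawl without touching the search machinery.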
In part two of this blog I will review our efforts in the Internet of Things.