
tinyML

2.4

https://arxiv.org/pdf/1311.2901v3.pdf (Zeiler and Fergus, "Visualizing and Understanding Convolutional Networks"), where the authors explore how a Convolutional Neural Network can ‘see’ features

2.2 Building Blocks

https://www.robotshop.com/en/arduino-nano-33-ble-microcontroller-with-headers.html

https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464

https://github.com/AIOpenLab/art

multi-layer neural networks

It is an oft-stated criticism of neural networks that it is impossible to know what they are doing, and here we get a taste of that. While I am sure there is some way to do a mathematical analysis on this simple 2-layer, 3-node network to figure out exactly how it arrived at those weights and biases, trying that on a big network is going to be very hard. It reminds me of learning quantum physics: once you get past hydrogen, the math gets really hard.
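As a concrete illustration of where that analysis would start, here is a minimal sketch (assuming a TensorFlow/Keras setup like the course notebooks; the toy y = 2x - 1 data and the 2-neuron/1-neuron layer sizes are illustrative, not from the course) that trains a 2-layer, 3-node network and prints the weights and biases it settled on:

```python
import numpy as np
import tensorflow as tf

# Toy data following y = 2x - 1, the same kind of single-feature example the course uses.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, input_shape=[1]),  # first layer: 2 neurons
    tf.keras.layers.Dense(1),                   # second layer: 1 neuron
])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(xs, ys, epochs=500, verbose=0)

# Each layer exposes its learned parameters; explaining *why* they ended up
# at these particular values is the hard part the note is talking about.
for i, layer in enumerate(model.layers):
    weights, biases = layer.get_weights()
    print(f"layer {i}: weights={weights.ravel()}, biases={biases}")
```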

neurons in action

Comparing the FirstNeuralNetwork (FNN) to Minimizing-Loss (ML), it seems that 'sgd' in FNN is the equivalent of the part of ML's train function that uses the derivatives and the learning rate to create new values for `w` and `b`. Specifying a learning rate of 0.09 seems to let ML converge quickly. As my BU professor said, if you have enough parameters to tweak you can accurately model almost anything. It is not obvious what learning rate the FNN is using. What is the magic sauce there?
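For reference, a minimal sketch of that comparison (assumptions: the course's single-neuron y = w*x + b setup, mean-squared-error loss, and made-up data and epoch counts). The manual loop is the Minimizing-Loss style update; the string 'sgd' asks Keras to perform the same update for you, and tf.keras.optimizers.SGD defaults to a learning rate of 0.01 unless you pass one explicitly, which is likely the "magic sauce" in the FNN notebook:

```python
import tensorflow as tf

xs = tf.constant([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = tf.constant([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

w = tf.Variable(10.0)
b = tf.Variable(10.0)
learning_rate = 0.09  # the value the note mentions

for epoch in range(50):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * xs + b - ys))
    dw, db = tape.gradient(loss, [w, b])
    # The same step 'sgd' performs internally: move against the gradient.
    w.assign_sub(learning_rate * dw)
    b.assign_sub(learning_rate * db)

print(f"w={w.numpy():.3f}, b={b.numpy():.3f}")  # should approach w=2, b=-1

# The FirstNeuralNetwork version hands this job to Keras:
# model.compile(optimizer='sgd', ...) uses SGD's default learning rate,
# or you can set it yourself with tf.keras.optimizers.SGD(learning_rate=0.09).
```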


corn worm: https://www.youtube.com/watch?v=23Q7HciuVyM
air quality: https://www.youtube.com/watch?v=9r2VVM4nfk8
retinopathy: https://www.youtube.com/watch?v=oOeZ7IgEN4o


Dense layer

Convolutional layers
contain filters that can be used to transform data. The values of these filters are learned in the same way as the parameters in the Dense neuron you saw here. Thus, a network containing them can learn how to transform data effectively. This is especially useful in Computer Vision, which you’ll see later in this course. We’ll even use these convolutional layers, typically found in vision models, to do speech detection! Are you wondering how or why? Stay tuned!

Recurrent layers
learn about the relationships between pieces of data in a sequence. There are many types of recurrent layer, with a popular one called LSTM (Long Short-Term Memory) being particularly effective. Recurrent layers are useful for predicting sequence data (like the weather) or understanding text.

You’ll also encounter layer types that don’t learn parameters themselves but can affect the other layers. These include dropout layers, which reduce the density of connections between dense layers to make them more efficient; pooling layers, which reduce the amount of data flowing through the network by discarding unnecessary information; and Lambda layers, which let you execute arbitrary code.
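A minimal sketch of how these layer types look in Keras (the layer sizes, input shape, and number of classes below are illustrative placeholders, not values from the course):

```python
import tensorflow as tf

# Only the layer types matter here; shapes and sizes are made up.
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(8, 3, activation='relu',
                           input_shape=(49, 40)),   # convolutional: learned filters
    tf.keras.layers.MaxPooling1D(2),                # pooling: shrink the data, keep the signal
    tf.keras.layers.LSTM(16),                       # recurrent: relationships in a sequence
    tf.keras.layers.Dropout(0.5),                   # dropout: randomly zero activations in training
    tf.keras.layers.Dense(10, activation='relu'),   # dense: fully connected neurons
    tf.keras.layers.Lambda(lambda x: x * 2.0),      # lambda: run arbitrary code in the model
    tf.keras.layers.Dense(4, activation='softmax'),
])
model.summary()
```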

https://www.hackster.io/news/easy-tinyml-on-esp32-and-arduino-a9dbc509f26c

@Timothy_Bonner on plants, light, and water


So I've just had a replacement heart valve put in. I have a $35 Mi Band that periodically reads my heart rate and other data, guidelines from my doctor that tell me to keep my heart rate within 20-30 beats per minute of my resting rate during exercise, and an exercise program from the physical therapist that says how much I can do during each week of my recovery. Maybe tinyML could combine the static doctor/PT guidelines with the 'edge' data (heart rate, distance, speed, hills, breathing rate, recovery-from-exercise rate, weight) to produce a gentle reminder that I could speed up a bit, slow down, take a rest, or otherwise get off my butt, in a way that optimizes my recovery.
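A toy, purely rule-based sketch of that idea (the resting rate, function names, and readings below are hypothetical; only the 20-30 bpm band comes from the note above):

```python
RESTING_HR = 62                 # example resting heart rate, bpm (made up)
BAND_LOW, BAND_HIGH = 20, 30    # doctor's guideline: stay this far above resting

def coach(current_hr: int) -> str:
    """Return a gentle reminder based on how far the current heart rate is above resting."""
    delta = current_hr - RESTING_HR
    if delta < BAND_LOW:
        return "You could speed up a bit."
    if delta > BAND_HIGH:
        return "Slow down or take a rest."
    return "Right in the target zone, keep going."

print(coach(78))   # 16 bpm above resting -> "You could speed up a bit."
print(coach(95))   # 33 bpm above resting -> "Slow down or take a rest."
```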

A. Which one of these cases do you find most concerning? Which concerns you the least?
B. What do you consider to be the relevant ethical challenges?
C. What do you think the designers of this technology could have done differently?
D. How can you apply learnings from these examples to your own job? Your personal life?
E. Do you agree or disagree with what others have already posted in the forum? Why?


surveillance capitalism is pervasive

The Nest fiasco is one example. Another: when we had guests over and were chatting in the living room, Alexa said "I'm sorry, I didn't quite hear that." But the damage to civil society by the algorithms of Facebook, YouTube, and Twitter is incalculable, hidden behind the willful obfuscation and disregard for privacy law by big tech. It is a major flaw of the internet that we depend on big tech to authenticate us, giving them unbridled access to our data and its use by AI to mold the very way we think. We need to authenticate ourselves and keep control and ownership of our own data. Tim Berners-Lee would agree. If our esteemed teacher uses "OK Google" as an example for the 20th time without touching on the implications, I think my head will explode. Why do we have to worship at the altar of big tech? IoT is their next frontier. We need to watch out.