Difference between revisions of "TinyML"

Revision as of 19:13, 13 December 2020

tinyML

2.6

During 2020, we have seen the release of Turing-NLG, a natural language model with 17 billion parameters, and GPT-3, with 175 billion parameters.

2.5

stakeholders

Shoshana Zuboff's work in "The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power" has caused me to cringe at the classical formulation of stakeholders as described in the "Who am I building this for" video. Susan Kennedy says: "The people who will potentially be impacted by the technology are referred to as the stakeholders. The direct stakeholders are the people who will use your technology, otherwise known as the users." In Zuboff's view the users are the raw material being extracted by the AI tech to create products customized to mold the thinking and ideas of the 'users'. The stakeholders are not the users but the corporations, whacko political parties, and such who are buying from big tech the ability to influence the users (victims).

In this formulation, whether the tech tends toward false positives or false negatives is irrelevant. The big question is what Alexa and Google are doing with what they are hearing. How are they repackaging our data (words, emotions, feelings) to subsequently influence us? That is the true ethical question. We know way more about false positives and negatives from living through the pandemic; the trade-offs involved are a red herring.

tinyML seems suited for optimization and monitoring of our resource and energy use, health, water, and sanitation systems. The patterns that are discovered could contribute to industrial and agricultural innovation. Cities and towns could more efficiently deliver services to citizens. But we must be vigilant and active in the conversation lest all the power and wealth end up in the hands of corporations (in America) or oppressive states.

I agree with @steve_daniels that beyond technology we "need more empathy and compassion". Perhaps we should study what has happened to create such toxic discourse in America, and how YouTube and Facebook AI tends to create models whose 'cost function' optimizes corporate profits at the expense of civil discourse.

I don't know when the UN 2030 goals were formulated, but they seem to be from a time when we still had our heads in the sand about racism and white supremacy. It seems OK to say 'gender equality' and 'reduced inequality', but nothing about race. But maybe that is just my USA experience of a place where people "voted, over and over again, to have worse lives? No healthcare, retirement, affordable education, childcare — no public goods of any kind whatsoever? White Americans did. What the? The question baffles the world. Why would anyone choose a worse life? The answer is that white Americans would not accept a society of true equals." https://eand.co/americas-problem-is-that-white-people-want-it-to-be-a-failed-state-bac24202f32f White supremacy and racism by the dominant "caste" seem to be contributing to a "failed state" in the US, one that permeates climate policy, decent work and good health, and greater and growing wealth inequality.

https://ruder.io/optimizing-gradient-descent/

https://www.hackster.io/news/easy-tinyml-on-esp32-and-arduino-a9dbc509f26c

2.4

-- https://arxiv.org/pdf/1311.2901v3.pdf -- where the authors explore how a Convolutional Neural Network can ‘see’ features

For some code on how to create visualizations of your filters check out: https://keras.io/examples/vision/visualizing_what_convnets_learn/
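If the Keras example is more than you need, here is a minimal NumPy sketch (my own toy illustration, not code from the paper or the Keras tutorial) of what a single convolutional filter "sees": a hand-written vertical-edge filter slid over a tiny half-dark image responds strongly only where the edge is.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a kernel over an image (no padding, stride 1) and
    return the map of filter responses."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A 6x6 image: dark on the left half, bright on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A classic Sobel-style vertical-edge filter.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

response = conv2d_valid(image, kernel)
print(response)  # nonzero only in the columns straddling the edge
```

In a trained CNN the kernel values are learned rather than hand-written, but the sliding dot product is the same operation the visualizations above are probing.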

2.2 Building Blocks

https://www.robotshop.com/en/arduino-nano-33-ble-microcontroller-with-headers.html

https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464

https://github.com/AIOpenLab/art

multi-layer neural networks

It is an oft-stated criticism of neural networks that it is impossible to know what they are doing. Here we get a taste of that. While I am sure there is some way to do a mathematical analysis on this simple 2-layer, 3-node network to figure out exactly how it came to those weights and biases, trying that on a big network is gonna be real hard. Reminds me of learning quantum physics: once you get past hydrogen the math is really hard.

neurons in action

Comparing the FirstNeuralNetwork (FNN) to Minimizing-Loss (ML), it seems that 'sgd' in FNN is the part of ML's train function that uses the derivatives and the learning rate to create new values for `w` and `b`. Specifying a learning rate of 0.09 allows ML to converge quickly. As my BU professor said, if you have enough parameters to tweak you can accurately model almost anything. It is not obvious what learning rate the FNN is using. What is the magic sauce there?
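Here is a sketch (my reconstruction, not the colab's exact code) of what I understand 'sgd' to be doing under the hood: use the loss derivatives and the 0.09 learning rate to repeatedly nudge `w` and `b` when fitting y = 2x - 1. As for the FNN's magic sauce, Keras's SGD optimizer has a default learning rate of 0.01 when you don't pass one.

```python
# Fit y = 2x - 1 by hand with gradient descent, mimicking the
# Minimizing-Loss train step (a sketch, not the original code).
xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
ys = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]

w, b = 10.0, 10.0          # deliberately bad starting guesses
learning_rate = 0.09       # the rate the Minimizing-Loss colab uses

for epoch in range(200):
    n = len(xs)
    # Mean squared error: loss = mean((w*x + b - y)^2)
    errors = [w * x + b - y for x, y in zip(xs, ys)]
    # Partial derivatives of the loss with respect to w and b.
    dw = sum(2 * e * x for e, x in zip(errors, xs)) / n
    db = sum(2 * e for e in errors) / n
    # The update step that 'sgd' performs for us in Keras.
    w -= learning_rate * dw
    b -= learning_rate * db

print(f"w={w:.3f}, b={b:.3f}")  # converges near w=2, b=-1
```

Raising the learning rate speeds convergence until the updates overshoot and diverge, which is why 0.09 "allows ML to converge quickly" while a timid rate would crawl.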


corn worm
https://www.youtube.com/watch?v=23Q7HciuVyM
air quality
https://www.youtube.com/watch?v=9r2VVM4nfk8
retinopathy
https://www.youtube.com/watch?v=oOeZ7IgEN4o


Dense layer
Convolutional layers
contain filters that can be used to transform data. The values of these filters will be learned in the same way as the parameters in the Dense neuron you saw earlier. Thus, a network containing them can learn how to transform data effectively. This is especially useful in computer vision, which you’ll see later in this course. We’ll even use these convolutional layers, typically used for vision models, to do speech detection! Are you wondering how or why? Stay tuned!
Recurrent layers
learn about the relationships between pieces of data in a sequence. There are many types of recurrent layers, with a popular one called LSTM (Long Short-Term Memory) being particularly effective. Recurrent layers are useful for predicting sequence data (like the weather) or understanding text.

You’ll also encounter layer types that don’t learn parameters themselves but which can affect the other layers. These include dropout layers, which randomly remove some of the connections between dense layers during training to reduce overfitting; pooling layers, which reduce the amount of data flowing through the network by discarding unnecessary information; and lambda layers, which allow you to execute arbitrary code.
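To make these layer types concrete, here is a pure-NumPy sketch (my own illustration, not course code) of the arithmetic each one performs on a tiny input: a dense layer, a dropout mask, and 2x2 max pooling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense layer: every input connects to every output (y = xW + bias).
x = np.array([1.0, 2.0, 3.0])
W = np.array([[0.1, 0.2],
              [0.3, 0.4],
              [0.5, 0.6]])
bias = np.array([0.5, -0.5])
dense_out = x @ W + bias            # shape (2,)

# Dropout: during training, randomly zero a fraction of activations
# (scaling the survivors) so the network can't over-rely on any one unit.
keep_prob = 0.5
mask = rng.random(dense_out.shape) < keep_prob
dropout_out = dense_out * mask / keep_prob

# Max pooling: keep only the strongest response in each 2x2 window,
# shrinking the data flowing through the network.
feature_map = np.array([[1.0, 3.0, 2.0, 0.0],
                        [4.0, 2.0, 1.0, 1.0],
                        [0.0, 1.0, 5.0, 2.0],
                        [1.0, 0.0, 2.0, 3.0]])
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))

print(dense_out)   # the dense layer's two outputs
print(pooled)      # the 2x2 map of window maxima
```

Note that only the dense layer has learned parameters (`W`, `bias`); dropout and pooling just reshape or mask the data flowing through, which is exactly the distinction the paragraph above draws.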

https://www.hackster.io/news/easy-tinyml-on-esp32-and-arduino-a9dbc509f26c

@Timothy_Bonner on plants light and water


So I've just had a replacement heart valve put in. I have a $35 Mi Band that periodically reads my heart rate and other data. I have guidelines from my doctor that tell me to keep my heart rate within 20-30 beats per minute of my resting rate during exercise, and I have an exercise program from the physical therapist on how much I can do during each week of my recovery. Maybe tinyML could combine the static doc/PT guidelines with the 'edge' data of heart rate, distance, speed, hills, breathing rate, recovery-from-exercise rate, and weight to produce a gentle reminder that maybe I could speed up a bit, or slow down, or take a rest, or get off my butt, in some way that optimizes my recovery.
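As a thought experiment, the core on-device rule could be as simple as this sketch (the function name and thresholds are my own hypothetical placeholders based on the guideline above, not medical advice or any real device's API): compare the live reading to the doctor's resting-rate window and emit a gentle nudge.

```python
def exercise_nudge(current_hr, resting_hr, min_above=20, max_above=30):
    """Hypothetical tinyML-style rule: keep the exercise heart rate
    within 20-30 bpm above the resting rate, per the example doctor
    guidance. Returns a gentle reminder string."""
    delta = current_hr - resting_hr
    if delta > max_above:
        return "slow down or take a rest"
    if delta < min_above:
        return "maybe speed up a bit"
    return "good pace, keep it up"

# Example with a resting rate of 60 bpm:
print(exercise_nudge(current_hr=95, resting_hr=60))  # 35 over: slow down
print(exercise_nudge(current_hr=75, resting_hr=60))  # 15 over: speed up
print(exercise_nudge(current_hr=85, resting_hr=60))  # 25 over: good pace
```

The tinyML part would be learning to blend the other signals (hills, breathing, recovery rate) into that decision on the wearable itself, instead of shipping the raw data to a cloud that, per the surveillance-capitalism worry above, we don't control.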

A. Which one of these cases do you find most concerning? Which concerns you the least?
B. What do you consider to be the relevant ethical challenges?
C. What do you think the designers of this technology could have done differently?
D. How can you apply learnings from these examples to your own job? Your personal life?
E. Do you agree or disagree with what others have already posted in the forum? Why?


surveillance capitalism is pervasive

The Nest fiasco is one example. When we had guests over and we were chatting in the living room and Alexa said "I am sorry, I didn't quite hear that", that was another. But the damage to civil society by the algorithms of Facebook, YouTube, and Twitter is incalculable, hidden behind big tech's wilful obfuscation and disregard for privacy law. It is a major flaw of the internet that we depend on big tech to authenticate us, giving them unbridled access to our data and its use by AI to mold the very way we think. We need to authenticate ourselves and keep control and ownership of our own data. Tim Berners-Lee would agree. If our esteemed teacher uses "OK Google" as an example for the 20th time without touching on the implications, I think my head will explode. Why do we have to worship at the altar of big tech? IoT is their next frontier. We need to watch out.