TinyML
tinyML
Python: cool code
<syntaxhighlight lang="python" line>
import re

def CamelCaseToSnakeCase(camel_case_input):
    """Converts an identifier in CamelCase to snake_case."""
    s1 = re.sub("(.)([A-Z][a-z]+)", r"\1_\2", camel_case_input)
    return re.sub("([a-z0-9])([A-Z])", r"\1_\2", s1).lower()
</syntaxhighlight>
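A quick sanity check of the helper above (illustrative; the example inputs are my own, and the expected outputs are just what the two regexes produce):

<syntaxhighlight lang="python">
# Example usage of CamelCaseToSnakeCase (illustrative).
print(CamelCaseToSnakeCase("TinyMLCourse2"))          # -> tiny_ml_course2
print(CamelCaseToSnakeCase("CamelCaseToSnakeCase"))   # -> camel_case_to_snake_case
</syntaxhighlight>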
Course 2
I guess I am most intrigued about learning from time series data. A long time ago I was fascinated by Hidden Markov Models for predicting speech based on what has been 'said' so far. That a network can have both short-term and long-term memory influencing learning seems cool. It probably won't fit on an Arduino though.
I have often thought that it would be possible to create a hands-on learning course using these tiny devices for kids 13-17. I piloted and co-taught an algebra/geometry/physics 80 min/day course for 13-14 year olds in an urban US public high school and got part way there. Being a coach for our school's FIRST Robotics team got me a little further. It's tricky. You want to dive into AI/ML but you need to constantly connect to first principles of algebra and physics. It is a great and worthwhile challenge.
I am feeling a strong hangover from the past 9 months of Covid. I feel isolated. Taking this course helps alleviate those feelings.
Being from the USA, I am immersed in a culture that has many negative aspects. This course allows me to feel more connected to a worldwide community.
The content and focus of this course is important to me. AI on tinyML is a metaphor for how I would like to see technology connect to society as technology that is accessible to the small players using local data that we own.
Tim from Boston
where, as I look out my window, there is 15" of snow so far and the storm is raging. So is Covid. So happy to settle in with you all to learn about this cool topic. I used to build houses; now I like to build little smart 3D-printed, ESP-powered devices.
tutorials
https://www.hackster.io/news/easy-tinyml-on-esp32-and-arduino-a9dbc509f26c
https://towardsdatascience.com/tensorflow-meet-the-esp32-3ac36d7f32c7
https://www.youtube.com/watch?v=AfAyHheBk6Y&list=PLtT1eAdRePYoovXJcDkV9RdabZ33H6Di0&index=3
I think everyone did a good job. I feel the responsible AI part was the weakest, and Susan Kennedy should read fellow Harvard academic Shoshana Zuboff's work.
The basic pedagogy is still rooted in 1990s methodology and data, now wrapped in Google Colab and TensorFlow instead of Java, C++ and MATLAB. I think the single neuron worked as a better 'hello world' than the perceptron of my time. I was surprised that backprop never made a real appearance, and I didn't love the black boxes of TensorFlow.
AI is like a rabbit hole of guesses. Doing the labs I had the feeling I should be replaced by an AI program that makes progressively better guesses on the number and types of layers, loss functions and optimizers. That's OK, that part wasn't that interesting to me anyway.
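Incidentally, that "program that makes progressively better guesses" already exists as hyperparameter search. A minimal sketch using the keras-tuner library (not course material; the MNIST dataset and the search ranges are illustrative choices of mine, not from the labs):

<syntaxhighlight lang="python">
# Hand the guessing over to a random hyperparameter search with keras-tuner.
import tensorflow as tf
import keras_tuner as kt

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

def build_model(hp):
    model = tf.keras.Sequential([tf.keras.layers.Flatten(input_shape=(28, 28))])
    # Let the tuner guess how many Dense layers to use and how wide each one is.
    for i in range(hp.Int("num_layers", 1, 3)):
        model.add(tf.keras.layers.Dense(hp.Int(f"units_{i}", 16, 128, step=16),
                                        activation="relu"))
    model.add(tf.keras.layers.Dense(10, activation="softmax"))
    # ...and which optimizer to use.
    model.compile(optimizer=hp.Choice("optimizer", ["adam", "sgd", "rmsprop"]),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy", max_trials=5)
tuner.search(x_train, y_train, epochs=3, validation_split=0.2)
print(tuner.get_best_hyperparameters(1)[0].values)   # the best guesses it found
</syntaxhighlight>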
2.6
During 2020 we saw the release of Turing-NLG, boasting a model size of 17 billion parameters, and GPT-3, a natural language model boasting 175 billion parameters.
2.5
stakeholders
Shoshana Zuboff's work in "The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power" has caused me to cringe at the classical formulation of stakeholders described in the "Who am I building this for" video. Susan Kennedy says: "The people who will potentially be impacted by the technology are referred to as the stakeholders. The direct stakeholders are the people who will use your technology, otherwise known as the users." In Zuboff's view the users are the raw material being extracted by the AI tech to create products customized to mold the thinking and ideas of the 'users'. The stakeholders are not the users but the corporations, whacko political parties and such who are buying from big tech the ability to influence the users (victims).
In this formulation, whether the tech tends to false positives or false negatives is irrelevant. The big question is what Alexa and Google are doing with what they are hearing. How are they repackaging our data (words, emotions, feelings) to subsequently influence us? That is the true ethical question. We know way more about false positives and negatives from living through the pandemic. The trade-offs involved are a red herring.
tinyML seems suited for optimization and monitoring of our resource and energy use, health, water and sanitation systems. The patterns that are discovered could contribute to industrial and agricultural innovation. Cities and towns could more efficiently deliver services to citizens. But we must be vigilant and active in the conversation lest all the power and wealth end up in the hands of corporations (in America) or oppressive states.
I agree with @steve_daniels that beyond technology we "need more empathy and compassion". Perhaps we should study what has happened to create such toxic discourse in America and how YouTube and Facebook AI tends to create models whose 'cost function' causes optimization of corporate profits at the expense of civil discourse.
I don't know when the UN 2030 goals were formulated, but they seem to be from a time when we still had our heads in the sand about racism and white supremacy. It seems OK to say 'gender equality' and 'reduced inequality' but nothing about race. But maybe that is just my USA experience of a place where people "voted, over and over again, to have worse lives? No healthcare, retirement, affordable education, childcare — no public goods of any kind whatsoever? White Americans did. What the? The question baffles the world. Why would anyone choose a worse life? The answer is that white Americans would not accept a society of true equals." https://eand.co/americas-problem-is-that-white-people-want-it-to-be-a-failed-state-bac24202f32f White supremacy and racism by the dominant "caste" seem to be contributing to a "failed state" in the US, and that permeates climate policy, decent work, good health, and the greater and growing wealth inequality.
https://ruder.io/optimizing-gradient-descent/
2.4
-- https://arxiv.org/pdf/1311.2901v3.pdf -- where the authors explore how a Convolutional Neural Network can ‘see’ features
For some code on how to create visualizations of your filters check out: https://keras.io/examples/vision/visualizing_what_convnets_learn/
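A rough sketch of the same idea (not the keras.io example itself; VGG16 is just a convenient pretrained model to pull filters from):

<syntaxhighlight lang="python">
# Grab the first convolutional layer's filters from a pretrained model and
# plot a handful of them as tiny grayscale images.
import matplotlib.pyplot as plt
import tensorflow as tf

model = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
first_conv = next(l for l in model.layers if isinstance(l, tf.keras.layers.Conv2D))
filters, biases = first_conv.get_weights()            # shape (3, 3, 3, 64) for VGG16
filters = (filters - filters.min()) / (filters.max() - filters.min())  # scale to 0..1

fig, axes = plt.subplots(1, 6, figsize=(12, 2))
for i, ax in enumerate(axes):
    ax.imshow(filters[:, :, 0, i], cmap="gray")       # first input channel of filter i
    ax.axis("off")
plt.show()
</syntaxhighlight>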
2.2 Building Blocks
https://www.robotshop.com/en/arduino-nano-33-ble-microcontroller-with-headers.html
https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464
https://github.com/AIOpenLab/art
multi-layer neural networks
It is an oft-stated criticism of neural networks that it is impossible to know what they are doing. Here we get a taste of that. While I am sure there is some way to do a mathematical analysis on this simple 2-layer, 3-node network to figure out exactly how it came to those weights and biases, trying that on a big network is gonna be real hard. Reminds me of learning quantum physics: once you get past hydrogen the math is really hard.
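For what it's worth, you can at least read the learned parameters back out, even if interpreting them is the hard part. A sketch (the tiny 2-layer model and the y = 2x - 1 data are stand-ins of mine, not the exact course notebook):

<syntaxhighlight lang="python">
# Train a small 2-layer network (2 hidden neurons + 1 output) and dump the
# weights and biases each layer ended up with.
import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])   # y = 2x - 1

model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, input_shape=[1]),     # hidden layer: 2 neurons
    tf.keras.layers.Dense(1),                      # output layer: 1 neuron
])
model.compile(optimizer="adam", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

for i, layer in enumerate(model.layers):
    weights, biases = layer.get_weights()
    print(f"layer {i}: weights={weights.ravel()}, biases={biases}")
</syntaxhighlight>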
neurons in action
Comparing the FirstNeuralNetwork (FNN) colab to Minimizing-Loss (ML), it seems that 'sgd' in FNN is the part of ML's train function that uses the derivatives and the learning rate to create new values for `w` and `b`. Specifying the learning rate of 0.09 seems to let ML converge quickly. As my BU professor said, if you have enough parameters to tweak you can accurately model almost anything. It is not obvious what learning rate the FNN is using. What is the magic sauce there?
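A side-by-side sketch of what I think is going on (the y = 2x - 1 data and the 0.09 learning rate follow the colabs; the rest is my own reconstruction). As far as I can tell, the "magic sauce" is just the optimizer's default: passing the string 'sgd' uses tf.keras.optimizers.SGD with its default learning rate (0.01 in current TensorFlow), so to match the ML colab you would pass SGD(learning_rate=0.09) explicitly.

<syntaxhighlight lang="python">
import tensorflow as tf

xs = tf.constant([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = tf.constant([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])   # y = 2x - 1

# Minimizing-Loss style: hand-rolled gradient descent on w and b.
w, b = tf.Variable(10.0), tf.Variable(10.0)
learning_rate = 0.09
for _ in range(50):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * xs + b - ys))
    dw, db = tape.gradient(loss, [w, b])
    w.assign_sub(learning_rate * dw)                  # new w = w - lr * dL/dw
    b.assign_sub(learning_rate * db)                  # new b = b - lr * dL/db
print(float(w), float(b))                             # should head toward 2 and -1

# FirstNeuralNetwork style: the same update hidden inside the SGD optimizer.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=[1])])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.09),
              loss="mean_squared_error")
model.fit(xs, ys, epochs=50, verbose=0)
print(model.layers[0].get_weights())                  # learned w and b
</syntaxhighlight>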
- corn worm
- https://www.youtube.com/watch?v=23Q7HciuVyM
- air quality
- https://www.youtube.com/watch?v=9r2VVM4nfk8
- retinopathy
- https://www.youtube.com/watch?v=oOeZ7IgEN4o
- Dense layer
- Convolutional layers
- contain filters that can be used to transform data. The values of these filters will be learned in the same way as the parameters in the Dense neuron you saw here. Thus, a network containing them can learn how to transform data effectively. This is especially useful in Computer Vision, which you’ll see later in this course. We’ll even use these convolutional layers that are typically used for vision models to do speech detection! Are you wondering how or why? Stay tuned!
- Recurrent layers
- learn about the relationships between pieces of data in a sequence. There are many types of recurrent layers, with a popular one called LSTM (Long Short-Term Memory) being particularly effective. Recurrent layers are useful for predicting sequence data (like the weather) or understanding text.
You'll also encounter layer types that don't learn parameters themselves but can affect the other layers. These include dropout layers, which are used to reduce the density of connections between dense layers to make them more efficient; pooling layers, which can be used to reduce the amount of data flowing through the network to remove unnecessary information; and Lambda layers, which allow you to execute arbitrary code.
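A small sketch pulling those building blocks together in one Keras model (my own toy example, not a course notebook; the recurrent/LSTM layer is left out because it expects sequence input rather than images):

<syntaxhighlight lang="python">
# Dense, convolutional, pooling, dropout and Lambda layers in one tiny model.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Lambda(lambda x: x / 255.0, input_shape=(28, 28, 1)),  # arbitrary code: rescale
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),   # learned filters
    tf.keras.layers.MaxPooling2D((2, 2)),                    # shrink the data flowing through
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),            # learned weights and biases
    tf.keras.layers.Dropout(0.2),                            # thin out connections during training
    tf.keras.layers.Dense(10, activation="softmax"),
])
# A recurrent layer would look like: tf.keras.layers.LSTM(32, input_shape=(timesteps, features))
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
</syntaxhighlight>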
@Timothy_Bonner on plants light and water
So I've just had a replacement heart valve put in. I have a $35 Mi Band that periodically reads my heart rate and other data. I have guidelines from my doctor that tell me to keep my heart rate within 20-30 beats per minute of my resting rate during exercise, and I have an exercise program from the physical therapist on how much I can do during each week of my recovery. Maybe tinyML could combine the static doctor/PT guidelines with the 'edge' data of heart rate, distance, speed, hills, breathing rate, recovery-from-exercise rate, and weight to produce a gentle reminder that maybe I could speed up a bit, slow down, take a rest, or get off my butt in some way that optimizes my recovery.
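Just to make the idea concrete, a toy sketch of that rule. Everything here (the resting rate, the band, the function name) is a hypothetical placeholder, not a real device API and not medical advice:

<syntaxhighlight lang="python">
# Toy on-device rule: compare a live heart-rate reading against the doctor's
# guideline of staying 20-30 bpm above the resting rate during exercise.
RESTING_HR = 62            # beats per minute, hypothetical resting rate
GUIDELINE_BAND = (20, 30)  # allowed rise above resting rate during exercise

def recovery_nudge(current_hr: int) -> str:
    """Return a gentle reminder based on how far the heart rate is above resting."""
    rise = current_hr - RESTING_HR
    if rise > GUIDELINE_BAND[1]:
        return "Slow down or take a rest."
    if rise < GUIDELINE_BAND[0]:
        return "You could safely speed up a bit."
    return "Nice pace, keep it up."

print(recovery_nudge(98))  # rise of 36 bpm -> "Slow down or take a rest."
</syntaxhighlight>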
A. Which one of these cases do you find most concerning? Which concerns you the least?
B. What do you consider to be the relevant ethical challenges?
C. What do you think the designers of this technology could have done differently?
D. How can you apply learnings from these examples to your own job? Your personal life?
E. Do you agree or disagree with what others have already posted in the forum? Why?
surveillance capitalism is pervasive
The Nest fiasco is one example. When we had guests over and were chatting in the living room and Alexa said "I'm sorry, I didn't quite hear that," that is another. But the damage to civil society by the algorithms of Facebook, YouTube and Twitter is incalculable, hidden behind the wilful obfuscation and disregard for privacy law by big tech. It is a major flaw of the internet that we depend on big tech to authenticate us, giving them unbridled access to our data and its use by AI to mold the very way we think. We need to authenticate ourselves and keep control and ownership of our own data. Tim Berners-Lee would agree. If our esteemed teacher uses OK Google as an example for the 20th time without touching on the implications, I think my head will explode. Why do we have to worship at the altar of big tech? IoT is their next frontier. We need to watch out.