TinyML


tinyML

While it is easy to benchmark TensorFlow Lite models via Colab, what standards or tools can be used to benchmark device performance? Example: KWS on the Nano 33. Would the benchmark be an audio file that you play to its microphone? Will there be extensions to the code to create a confusion matrix or some other measure?
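One possibility (an untested sketch, not an official benchmark): play a labeled set of clips at the board's microphone, log each result over the serial monitor as `expected,predicted`, and build the confusion matrix offline. The results.csv file and its format here are hypothetical.

<syntaxhighlight lang="python">
# Sketch: turn per-clip results logged from the device into a confusion matrix.
# Assumes results.csv holds lines like "yes,yes" (expected,predicted) captured
# while playing labeled clips at the board's microphone.
import csv
from sklearn.metrics import confusion_matrix, classification_report

expected, predicted = [], []
with open("results.csv") as f:
    for row in csv.reader(f):
        expected.append(row[0])
        predicted.append(row[1])

labels = ["yes", "no", "unknown", "silence"]  # classes from the course KWS example
print(confusion_matrix(expected, predicted, labels=labels))
print(classification_report(expected, predicted, labels=labels))
</syntaxhighlight>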

troubleshooting

@bagwas thereafter the accuracy was really good
@H_Scholten The idea is to trigger the recording with the switch on the board and record a word.
@dizgotti, it seems that you have dug into the code a bit to find "failed to detect".
@brian_plancher In terms of adding volume requirements to the website -- I will definitely look into it
@vjreddi it shouldn't be really bad for yes/no. It should definitely work.
@stephenbaileymagic I believe that I will need to do a lot of tweaks to the whole pipeline to improve real world performance.
@stephenbaileymagic it doesn’t test the micro model after converting

on poor KWS performance: tweak the whole pipeline?

I decided `ProcessLatestResults` was sending stuff to `RespondToCommand` that was driving me crazy and hiding data that I wanted to see. I put

voice recognition project

https://www.mathworks.com/help/audio/ug/speaker-identification-using-pitch-and-mfcc.html

https://github.com/pannous/tensorflow-speech-recognition

Pete's audio dataset

https://github.com/gigwegbe/tinyml-papers-and-projects

https://discuss.tinymlx.org/c/projects/5

The physical shape of the vocal tract is the primary physiological component. Voice verification technology works by converting a spoken phrase from analog to digital format and extracting the distinctive vocal characteristics, such as pitch, cadence, and tone, to establish a speaker model or voiceprint; a template is then generated. Voice verification systems can be text dependent, text independent, or a combination of the two. A "pass phrase" can be a piece of information such as a name, birth city, favorite color, or a sequence of numbers.

https://www.globalsecurity.org/security/systems/biometrics-voice.htm

Voice recognition is much more specific and requires significantly more processing and analysis than speech recognition. Text-dependent verification requires less data, and the utterance of a fixed, predetermined phrase improves authentication performance. Speech samples are waveforms with time on the horizontal axis and loudness on the vertical axis. In speaker recognition systems these samples are converted from an analog format to a digital format. Then the features of the individual's voice are extracted and a voice model is created. These models are built from the underlying variations and temporal changes found in the speech, such as the quality, duration, intensity dynamics, and pitch of the signal. They are used to compare the similarities and differences between the input voice and the stored voice "states" to produce a recognition decision. Most "text dependent" speaker verification systems use Hidden Markov Models (HMMs), which provide a statistical representation of the sounds produced by the individual. Another method is the Gaussian Mixture Model, a state-mapping model closely related to HMMs, often used for unconstrained "text independent" applications.
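To make that feature-extraction front end concrete, here is a minimal sketch (assuming librosa, which is just one choice, and a hypothetical sample.wav) that pulls MFCCs and a rough pitch track from a clip:

<syntaxhighlight lang="python">
# Sketch: extract MFCC features and a rough pitch estimate from a speech sample,
# the kind of front end a voiceprint model would be trained on.
import numpy as np
import librosa

y, sr = librosa.load("sample.wav", sr=16000)        # hypothetical recording, 16 kHz
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # 13 MFCCs per frame
f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)       # rough fundamental frequency (pitch)

print(mfcc.shape)       # (13, n_frames)
print(np.median(f0))    # median pitch in Hz
</syntaxhighlight>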

Current research introduces the concept of an end-to-end neural speaker recognition system that works well for both text-dependent and text-independent scenarios. This means the same system can be trained to recognize who is speaking, whether you are saying a wake word to activate your home assistant or speaking in a meeting. Most of the recent research uses deep neural network layers inspired by ResNet, along with recurrent models, to extract acoustic features.

Voice recognition is not considered a very distinctive biometric trait and may not be appropriate for large-scale identification.

https://www.veridiumid.com/my-voice-is-my-password-the-power-of-voice-as-a-biometric/

ations. Our code is available in a public repository at https://github.mit.edu/jodiec/6857 and https://github.mit.edu/ashoup/dejavuDemo.


https://courses.csail.mit.edu/6.857/2016/files/31.pdf

post to projects

    1. 'open sesame' door latch from KWS to SRE (speaker recognition)

Authentication by voice on a local device is worth exploring. Current methods use FFT. Maybe our code could be adapted to recognize a person by their voice in a small micro model.

goal:

  • where: for the door to my shop and the back door of my house
  • when: I am carrying something with both hands that I don't want to put down
  • what: say 'open sesame' and have the latch release so I can push open the door

collect data:

  • It only needs to work for a few people and it is better if it doesn't work for anyone else.
  • Maybe figure out a way to collect samples with a parallel system running at the door, recording the phrase and other talk and shipping it out to some more powerful server to collect and use for training.
  • Iterate models. Someday there will be enough data and the door will open.

revision:

  • Eventually it might work. Then it will need some intermediary layer. Maybe that layer won't be suitable to work on the device.
  • Add person recognition layer, either visual or auditory.
  • Until that layer is compactly smart, it could do a long loop to the server. If the server doesn't agree it could set off alarms.
  • long term goal: approval from @SecurityGuy

hardware:

  • a $24, 12 V door latch
  • a 120 V to 12 V transformer
  • a 3.3 V relay (driven from a GPIO pin; see the sketch below)
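A minimal sketch of the latch side, assuming an ESP32 running MicroPython with the relay input on GPIO 5 (both assumptions; the pin and hold time are arbitrary). The KWS/speaker-recognition model would be what sets `authorized`:

<syntaxhighlight lang="python">
# Sketch (MicroPython, untested): pulse the 3.3 V relay to release the 12 V
# door latch for a few seconds once the model authorizes the speaker.
from machine import Pin
import time

relay = Pin(5, Pin.OUT, value=0)   # relay driver input, off at boot

def release_latch(hold_s=3):
    relay.value(1)                 # energize relay, latch releases
    time.sleep(hold_s)             # long enough to push the door open
    relay.value(0)                 # lock again

# somewhere in the inference loop:
# if authorized:                   # set by the KWS + speaker-recognition model
#     release_latch()
</syntaxhighlight>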
    1. alternate approaches:

> Perhaps it is possible to leverage an audio phrase database to train a person-authentication model. Maybe forget 'open sesame' as a training phrase. Maybe use something really common, a bajillion recordings of 'hello' for instance. I would have to say hello a thousand times. Can the model learn which one is me?

> If that works, how could I make it work on an uncommon word that will open my door? Maybe take all the audio data and reverse it in time, like playing the Beatles' White Album backwards to hear 'Paul is dead'. Instead of 'hello', 'olleh' opens my door. Train on that and build another model.

datasets

https://github.com/mlcommons/tiny

https://archive.ics.uci.edu/ml/datasets.php

https://arxiv.org/abs/1908.03299 anomaly detection

    1. references & discussion:

https://keras.io/examples/audio/speaker_recognition_using_cnn/

>"Internet elders such as MIT’s David D. Clark argue that authentication – verification of the identity of a person or organization you are communicating with – should be part of the basic architecture of a new Internet...(Tim Berners Lee) proposed a simple authentication tool – a browser button labeled “Oh, yeah?” that would verify identities" https://www.technologyreview.com/2005/12/01/230006/click-oh-yeah/

For the Internet of Things (IoT), authentication and ownership of data are key issues that must be faced now, before we have hundreds of microcontrollers in our homes sending data to unnamed servers and big tech claiming the data as their own, invading our homes. Recently, in an Independent Activities Period (IAP) class at MIT, Tim Berners-Lee admitted to making a big mistake when he wrote HTML: there should have been authentication built in.

Recently I had to call Fidelity a lot since I was the executor of my mom's will. I said a short phrase just a few times, over the phone and now I am authenticated as soon as they hear my voice. What can we do on a microcontroller?

    1. request for team

This project seems right for a team. At the core it will require lots of experimentation and testing of models spanning the whole gamut of tinyML development as laid out by Vijay Janapa Reddi and the rest of the leaders and staff of the edX tinyML classes.

  • Team members could compare their selection of FFT variation, pre-processing, model architecture, and inference implementation.
  • The team could compare and evaluate publicly available audio data or models that we could piggyback on.
  • Team members could generate and share voiceprint data and experiment on the optimal phrase type for voice authentication.
  • The team could develop a hardware/software subsystem for collecting voice data and storing it on a server accessible to the whole team.
  • The team could discuss, reach consensus on, or produce multiple iterations of an actual hardware/software implementation.

I am very excited and ready to go. Please consider working with me on this or some pivot of this project idea. What do you think?


Sounds great:)

> "Hollywood is mixing sound for drama, not clarity. We recently Googled the phase "hard to understand dialogue movies" and got 162 million hits. Hollywood is using more adventurous audio mixing techniques – often resulting in unclear dialogue....By combining compression, consonant-range boost, formant enhancement, minimization of bass output -- plus some proprietary techniques we would rather not divulge, the ZVOX AccuVoice system creates outstanding results"

I got my wife a TV soundbar from them for Christmas. We no longer need subtitles. Maybe those "proprietary techniques" they are talking about involve a neural network model.

https://zvox.com/collections/accuvoice



'open sesame' door latch discussion posted about an hour ago by mcktim


"the difficulty associated with procuring large quantities of data. This procedure is aptly named dataset engineering" or 'sexy AI machine learning job' or being a mechanical turk (soon to be replaced by meta-AI. "data provenance. We must be cognizant of the origin of our data" and consult experts on how they have managed to take our data. "radiologists may be necessary" for a little while longer.

3-1.6

3-1.5 Forum: Pretrained KWS Model

testing myself: trying to load my saved model

I found a stopgo_model.pb that I saved from course 2 and asked myself: how do I load it even though it is not .tflite? I tried referencing https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/save_and_load.ipynb

 %tensorflow_version 1.x
 import tensorflow as tf
 from tensorflow import keras
 print(tf.version.VERSION)

 from google.colab import files
 uploaded = files.upload()                                  # upload model-stop-go.zip from disk
 !unzip model-stop-go.zip &> 0
 new_model = tf.keras.models.load_model('model-stop-go')    # load the unzipped SavedModel directory

I get:

 WARNING:tensorflow:Unable to create a python object for variable <tf.Variable 'first_weights:0' shape=(10, 8, 1, 8) dtype=float32_ref> because it is a reference variable. It may not be visible to training APIs. If this is a problem, consider rebuilding the SavedModel after running tf.compat.v1.enable_resource_variables().




   %tensorflow_version 1.x
   import tensorflow as tf
   !wget https://github.com/tensorflow/tensorflow/archive/v2.4.1.zip
   !unzip v2.4.1.zip &> 0
   !mv tensorflow-2.4.1/ tensorflow/
   import sys
   # We add this path so we can import the speech processing modules.
   sys.path.append("/content/tensorflow/tensorflow/examples/speech_commands/")


   from google.colab import files
   uploaded = files.upload()
   float_converter = tf.lite.TFLiteConverter.from_saved_model('stopgo_model.pb')  # from_saved_model expects a SavedModel directory, not a .pb file

Also tried '/content/stopgo_model.pb' but neither worked.

I hate to think this portends a total helplessness once I am not in a class. Any hints?
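One guess (untested): a bare stopgo_model.pb is probably a frozen GraphDef rather than a SavedModel directory, so `from_saved_model` won't take the path. Under TF 1.x there is a frozen-graph converter; the tensor names below are placeholders and would have to match whatever the course-2 training script actually exported (a viewer like Netron can show them):

<syntaxhighlight lang="python">
# Sketch for TF 1.x: convert a frozen GraphDef (.pb) directly to .tflite.
# "Reshape_2" and "labels_softmax" are hypothetical input/output tensor names;
# inspect the graph to find the real ones.
import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="stopgo_model.pb",
    input_arrays=["Reshape_2"],          # placeholder input tensor name
    output_arrays=["labels_softmax"],    # placeholder output tensor name
)
tflite_model = converter.convert()
open("stopgo_model.tflite", "wb").write(tflite_model)
</syntaxhighlight>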

3-1.3

https://github.com/tinyMLx/appendix/blob/main/BLEOverview.md

YOUR PLANS FOR THIS COURSE

I plan to sit at my desk in my study for 2-4 hours each weekday either working directly on the course content or in shoring up my foundations where needed so I can get most from the content.


I will make it a habit and have it be the center of the field of study for the duration. I will work hard at creating a programming environment that lends itself to an efficient workflow.


I will turn to the community to see how other members of the class are dealing with problems in colab or tensorflow or c++.


welcome back course 3

back to thinking in AI

I am very happy to be continuing. These courses have brought me back to thinking in AI. It is very different from procedural programming. The prior 2 courses have eased me back into the AI frame of mind. TinyML gives me hope too, that AI is not just for the mega tech corporations who own all our data continually crunching it with their power-hungry data centers. We will see if that is true, maybe even be connected to a movement toward a more personal, community based AI.

I was already a little sad that this much anticipated final course, though starting, would soon end. It is very exciting to see a horizon stretching through August and beyond with the project challenge. I hope to connect with some in this amazing worldwide community on a project idea.

I thank Vijay Reddi, Laurence Moroney, Susan Kennedy, and all the talented staff for leading the way. I thank MIT, which has let me in for many decades: from when it hosted the Lowell Institute and I got to program on those cards with devices like the Apple 1's 6502 chip, to years and years of being allowed to sneak into IAP (Independent Activities Period) classes in the dark of winter.

responsible AI development

Under the equal-accuracy model, the 'actual yes' rates (true positives) are similar for males and females, and the 'actual no' rates (true negatives) are similar for males and females.

The problem is that 21% of men are denied loans even though they actually deserve them, while 20.4% of women get loans who probably should not.

Costly to whom? The bank potentially loses (through false positives) by lending to some women, but those women gain the opportunity to have a loan. Men who don't get loans due to the high 'predicted no' rate (false negatives) lose out, and so do the banks, which had a good chance of making money on those loans.
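To keep the two rates straight, a tiny sketch with made-up counts, chosen only so the headline numbers above come out as quoted; the counts themselves are hypothetical:

<syntaxhighlight lang="python">
# "Denied even though they deserve it" = false-negative rate.
# "Approved but probably shouldn't be" = false-positive rate.
def rates(tp, fp, fn, tn):
    fnr = fn / (fn + tp)   # deserving applicants who are denied
    fpr = fp / (fp + tn)   # undeserving applicants who are approved
    return fnr, fpr

print("men:  ", rates(tp=790, fp=30, fn=210, tn=970))    # hypothetical counts -> FNR 0.21
print("women:", rates(tp=800, fp=204, fn=200, tn=796))   # hypothetical counts -> FPR 0.204
</syntaxhighlight>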


group unawareness

Which form of bias did you find most surprising or interesting? Which form of bias do you think poses the most serious challenge to TinyML? Which solutions for bias (either industry or individual designer) do you think are most promising? Are there any obstacles you can foresee for the solutions you think are most promising?

industry or individual

I think the biggest obstacles will continue to be our lack of control over our own data. We give up that control every time we click that 'agree' button when we download a new app. These academic datasets are just window dressing on a systematic effort by big tech to use us as a natural resource, mining our every move, gesture, mood, thought, and idea and monetizing it to the highest bidder, so that those bidders are better able to mold our ideas, beliefs, and emotions. We might end up too dumb to control our own data even if given the chance.

"that each person has their own opinions and way of seeing the data" may be at the heart of it. As a society, we seem to have lost our sense of a 'common good'. With the help of big tech and social media we are balkanized into groups whose biases are played upon, exaggerated and profited by. Is it even possible to do bias-free AI in a world like ours?


https://www.usatoday.com/story/tech/2020/01/06/voice-assistants-remain-out-reach-people-who-stutter/2749115001/

https://sites.research.google/euphonia/about/

https://venturebeat.com/2020/06/17/research-shows-non-native-english-speakers-struggle-with-voice-assistants/

https://www.theverge.com/2020/3/24/21192333/speech-recognition-amazon-microsoft-google-ibm-apple-siri-alexa-cortana-voice-assistant

https://www.businessinsider.com/amazon-ai-biased-against-women-no-surprise-sandra-wachter-2018-10

https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

automate bias and inequality and deploy it at scale

2_1-8 anomaly detection

https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall

https://towardsdatascience.com/understanding-latent-space-in-machine-learning-de5a7c687d8d

sucker for visualizations

Well, K-means and K-means plus dimension-reduction techniques have the advantage of being pretty easy to understand, more like procedural programming. But then, the whole point of my taking this course is to get out of my comfort zone. Autoencoders remain a little more of a black box to me; 'get used to it,' I tell myself.

As for thresholds, making a choice like a 99th-percentile or 1-std distance seems arbitrary. An ROC curve seems more intuitive: choose the knee or not, depending on your tolerance for false positives (false alarms) versus missed positives. It's the same dilemma for Fauci et al. and the little boy who cried wolf.

I can think of an IOT device case where missed positives don't matter too much. If you are taking sensor readings in an endless loop and deciding to activate something based on the trends in the data, missing a few readings/inferences won't really hurt you.
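A sketch of the ROC-based threshold pick, with fake reconstruction errors standing in for a real autoencoder's output:

<syntaxhighlight lang="python">
# Sketch: choose an anomaly threshold from an ROC curve instead of an arbitrary
# percentile or 1-std cutoff. Labels are 1 for anomalous validation samples.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
errors = np.concatenate([rng.normal(0.02, 0.01, 900),    # normal samples: low error
                         rng.normal(0.08, 0.03, 100)])   # anomalies: higher error
labels = np.concatenate([np.zeros(900), np.ones(100)])

fpr, tpr, thresholds = roc_curve(labels, errors)

# One way to find the "knee": the point closest to the ideal corner (FPR=0, TPR=1).
# Slide away from it if false alarms (or misses) are the costlier mistake.
knee = np.argmin(fpr**2 + (1 - tpr)**2)
print(f"threshold={thresholds[knee]:.4f}  FPR={fpr[knee]:.2f}  TPR={tpr[knee]:.2f}")

is_anomaly = errors > thresholds[knee]
</syntaxhighlight>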



https://arxiv.org/pdf/1909.09347.pdf

https://iq.opengenus.org/applications-of-autoencoders/

https://blog.keras.io/building-autoencoders-in-keras.html

https://towardsdatascience.com/step-by-step-understanding-lstm-autoencoder-layers-ffab055b6352


https://www.tensorflow.org/tutorials/generative/autoencoder


maybe you don't need visual wake word NN's

Back to the entrance camera example: perhaps some simple-to-compute k-means or kNN calculation running on a low-power core could detect the presence of a new object in the camera's field. That could wake up the more powerful core, which could then do the heavy lifting of talking to the server.
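Something like this running-average difference check is the cheap first stage I have in mind (not literal k-means, but the same spirit; the threshold and update rate are arbitrary):

<syntaxhighlight lang="python">
# Sketch: a very cheap "something new in frame" detector to gate the expensive model.
# Keeps an exponential running average of the background and flags frames whose
# mean absolute difference from it is large.
import numpy as np

ALPHA = 0.05        # background update rate (arbitrary)
THRESHOLD = 12.0    # mean absolute pixel difference that counts as "new object" (arbitrary)

background = None

def frame_is_interesting(frame):
    """frame: 2-D uint8 grayscale image from the camera."""
    global background
    f = frame.astype(np.float32)
    if background is None:
        background = f
        return False
    diff = np.abs(f - background).mean()
    background = (1 - ALPHA) * background + ALPHA * f   # slowly adapt to lighting changes
    return diff > THRESHOLD

# if frame_is_interesting(frame): wake the high-power core / talk to the server
</syntaxhighlight>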

While these uses might seem unethical, especially to privacy advocates, they are entirely legal and abide by the ethical conventions: obtain informed consent; maximize probable benefits and minimize probable harms; be transparent in methods and results; be accountable; uphold the ethical foundations set by our forebears.

2_1-7 visual wake words

it might be easier

We were already processing sound as images in the last section for KWS, so that part will be similar. If it is a fixed camera it might be easier. Perhaps we could onboard the device by having the customer mount it and send in an image with nobody there. We could (on the server) do edge detection, add that as features, and then create synthetic images overlaying an object representation of a person pulled from some object database, and train on that. We would probably have to tell the customer we'll get back to them once their custom model is done. Maybe we could do "person", "package", "dog", "nothing". If you moved the camera somewhere else you would have to start all over.
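A sketch of the synthetic-overlay idea, assuming Pillow and hypothetical filenames for the customer's empty-scene photo and a transparent person cutout:

<syntaxhighlight lang="python">
# Sketch: paste a transparent person cutout onto the customer's empty-scene photo
# at random positions/scales to generate "person present" training images.
import random
from PIL import Image

background = Image.open("empty_entrance.jpg").convert("RGB")   # hypothetical file
person = Image.open("person_cutout.png").convert("RGBA")       # hypothetical file, has alpha

for i in range(100):
    scale = random.uniform(0.4, 0.9)
    w, h = int(person.width * scale), int(person.height * scale)
    cutout = person.resize((w, h))
    x = random.randint(0, background.width - w)
    y = max(0, background.height - h)            # keep feet roughly at the bottom edge
    sample = background.copy()
    sample.paste(cutout, (x, y), mask=cutout)    # alpha channel used as the paste mask
    sample.save(f"synthetic_person_{i:03d}.jpg")
</syntaxhighlight>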

2_1-6 make data collection easier?

leverage your IOT device users

You might encourage participation in a data collection project related to your application by an onboarding process that collects data related to that particular device located at that particular location with those particular users. Perhaps then you could entice users to share their data with the rest of the people who have the same device/app maybe by unlocking additional features for those who agree to share their (properly anonymized) data.


https://commonvoice.mozilla.org/en

automatic parsing and taking advantage of the limited IOT environments

Collecting data for limited environments would seem easier. You don't necessarily need all the accents and voices of the world for KWS, for example. Each IoT device has its own location and its own local environment. In deploying a device you may need to develop an onboarding process limited to the users at that location. Perhaps users would choose their own keywords, then read a script into a data-collection app that PARSES individual words, does pre-processing for training, and then EITHER adds that data to some available dataset OR uses a model already trained up and additionally trains on the new data. Out pops a custom model for that device at that location.
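For the PARSES step, one cheap possibility (my assumption, not something from the course) is splitting the recorded script on silence:

<syntaxhighlight lang="python">
# Sketch: cut a recorded onboarding script into individual word/phrase clips
# by splitting on silence. top_db and the output naming are arbitrary choices.
import librosa
import soundfile as sf

y, sr = librosa.load("onboarding_script.wav", sr=16000)   # hypothetical recording
intervals = librosa.effects.split(y, top_db=30)           # (start, end) sample indices of non-silence

for i, (start, end) in enumerate(intervals):
    sf.write(f"word_{i:02d}.wav", y[start:end], sr)
</syntaxhighlight>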

colab

how to save editable versions

  • copy to Google Drive
  • rename from 'Copy of ...' to 'TinyML-...'
  • save a copy as a GitHub gist

how-to-save-our-model-to-google-drive-and-reuse-it

https://blog.roboflow.com/how-to-save-and-load-weights-in-google-colab/

unsupervised

"The bulk of what is learned by an algorithm must consist of understanding the data itself, rather than applying that understanding to particular tasks"

https://deepmind.com/blog/article/unsupervised-learning

tensorflow micro

https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/micro/examples/hello_world#deploy-to-esp32

esp32

esp-tensorflow

https://github.com/espressif/tensorflow/tree/master/tensorflow/lite/micro/examples

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/person_detection/esp/README_ESP.md

esp-who

https://robotzero.one/esp-who-recognition-with-names/

https://github.com/espressif/esp-who/blob/master/docs/en/get-started/ESP-EYE_Getting_Started_Guide.md

esp-idf

https://medium.com/@dmytro.korablyov/first-steps-with-esp32-and-tensorflow-lite-for-microcontrollers-c2d8e238accf

https://diyprojects.io/esp32-getting-started-esp-idf-ide-arduino-macos-windows-linux/

https://docs.espressif.com/projects/esp-idf/en/latest/esp32/get-started/index.html

on wsl2

https://gist.github.com/abobija/2f11d1b2c7cb079bec4df6e2348d969f

https://robdobson.com/2020/10/installing-esp-idf-on-wsl2/

https://www.instructables.com/ESP32-Development-on-Windows-Subsystem-for-Linux/

esp32-eye

https://blog.tensorflow.org/2020/08/announcing-tensorflow-lite-micro-esp32.html


https://thingpulse.com/esp32-how-to-use-psram/

https://www.hackster.io/news/face-detection-and-recognition-on-the-esp32-3b4b9a35c765

https://github.com/espressif/esp-who

http://cn.arxiv.org/pdf/1604.02878v1 http://cn.arxiv.org/abs/1604.02878

https://arxiv.org/pdf/1801.04381 https://arxiv.org/abs/1801.04381

python cool code

<syntaxhighlight lang="python">

import re

def CamelCaseToSnakeCase(camel_case_input):
    """Converts an identifier in CamelCase to snake_case."""
    s1 = re.sub("(.)([A-Z][a-z]+)", r"\1_\2", camel_case_input)
    return re.sub("([a-z0-9])([A-Z])", r"\1_\2", s1).lower()

</syntaxhighlight>
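For example, `CamelCaseToSnakeCase("ProcessLatestResults")` returns `"process_latest_results"`: the first substitution splits at boundaries like "sLatest", and the second catches the remaining lower-to-upper transitions before lowercasing everything.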

course2

https://medium.com/@d3lm/understand-tensorflow-by-mimicking-its-api-from-scratch-faa55787170d

I guess I am most intrigued about learning from time series data. A long time ago I was fascinated by Hidden Markov Models for predicting speech based on what has been 'said' so far. That a network can have short term memory and long term memory both influencing learning seems cool. Probably won't fit on an arduino though.

I have often thought that it would be possible to create a hands-on learning course using these tiny devices for kids 13-17. I piloted and co-taught an algebra/geometry/physics 80-minute-a-day course for 13-14 year olds in an urban US public high school and got part way there. Being a coach for our school's FIRST Robotics team got me a little further. It's tricky: you want to dive into AI/ML, but you need to constantly connect to first principles of algebra and physics. It is a great and worthwhile challenge.

I am feeling a strong hangover from the past 9 months of Covid. I feel isolated. Taking this course helps alleviate those feelings.

Being from the USA, I am immersed in a culture that has many negative aspects. This course allows me to feel more connected to a worldwide community.

The content and focus of this course is important to me. AI on tinyML is a metaphor for how I would like to see technology connect to society as technology that is accessible to the small players using local data that we own.

Tim from Boston

where, as I look out my window, there is 15" of snow so far and the storm is raging. So is Covid. So happy to settle in with you all to learn about this cool topic. I used to build houses; now I like to build little smart 3D-printed, ESP-powered devices.

tutorials

https://www.hackster.io/news/easy-tinyml-on-esp32-and-arduino-a9dbc509f26c

https://towardsdatascience.com/tensorflow-meet-the-esp32-3ac36d7f32c7

https://www.youtube.com/watch?v=AfAyHheBk6Y&list=PLtT1eAdRePYoovXJcDkV9RdabZ33H6Di0&index=3

I think everyone did a good job. I feel the responsible AI part was the weakest, and Susan Kennedy should read fellow Harvard academic Shoshana Zuboff's work.

The basic pedagogy is still rooted in 1990s methodology and data, now wrapped in Google Colab and TensorFlow instead of Java, C++, and MATLAB. I think the single neuron worked as a better 'hello world' than the perceptron of my time. I was surprised that backprop never made a real appearance, and I didn't love the black boxes of TensorFlow.

AI is like a rabbit hole of guesses. Doing the labs, I had the feeling I should be replaced by an AI program that makes progressively better guesses on the number and types of layers, loss functions, and optimizers. That's OK; that part wasn't that interesting to me anyway.

2.6

During 2020, we saw the release of Turing-NLG, boasting a model size of 17 billion parameters, and the GPT-3 natural language model, boasting 175 billion parameters.

2.5

stakeholders

Shoshana Zuboff's work in "The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power" has caused me to cringe at the classical formulation described in the "Who am I building this for" video. Susan Kennedy speaks of "The people who will potentially be impacted by the technology are referred to as the stakeholders. The direct stakeholders are the people who will use your technology, otherwise known as the users." In Zuboff's view the users are the raw material being extracted by the AI tech to create products customized to mold the thinking and ideas of the 'users'. The stakeholders are not the users but the corporations, whacko political parties, and such who are buying from big tech the ability to influence the users (victims).

In this formulation, whether the tech tends toward false positives or false negatives is irrelevant. The big question is what Alexa and Google are doing with what they are hearing: how are they repackaging our data (words, emotions, feelings) to subsequently influence us? That is the true ethical question. We know way more about false positives and negatives from living through the pandemic. The trade-offs involved are a red herring.

tinyML seems suited for optimization and monitoring of our resource and energy use, health, water, and sanitation systems. The patterns that are discovered could contribute to industrial and agricultural innovation. Cities and towns could more efficiently deliver services to citizens. But we must be vigilant and active in the conversation lest all the power and wealth end up in the hands of corporations (in America) or oppressive states.

I agree with @steve_daniels that beyond technology we "need more empathy and compassion". Perhaps we should study what has happened to create such toxic discourse in America and how YouTube and Facebook AI tends to create models whose 'cost function' causes optimization of corporate profits at the expense of civil discourse.

I don't know when the UN 2030 goals were formulated, but it seems to be from a time when we still had our heads in the sand about racism and white supremacy. It seems OK to say 'gender equality' and 'reduced inequality' but nothing about race. But maybe that is just my USA experience of a place where people "voted, over and over again, to have worse lives? No healthcare, retirement, affordable education, childcare — no public goods of any kind whatsoever? White Americans did. What the? The question baffles the world. Why would anyone choose a worse life? The answer is that white Americans would not accept a society of true equals." https://eand.co/americas-problem-is-that-white-people-want-it-to-be-a-failed-state-bac24202f32f White supremacy and racism by the dominant "caste" seem to be contributing to a "failed state" in the US that permeates climate policy, decent work, good health, and greater and growing wealth inequality.

https://ruder.io/optimizing-gradient-descent/


2.4

-- https://arxiv.org/pdf/1311.2901v3.pdf -- where the authors explore how a Convolutional Neural Network can ‘see’ features

For some code on how to create visualizations of your filters check out: https://keras.io/examples/vision/visualizing_what_convnets_learn/

2.2 Building Blocks

https://www.robotshop.com/en/arduino-nano-33-ble-microcontroller-with-headers.html

https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464

https://github.com/AIOpenLab/art

multi-layer neural networks

It is an oft-stated criticism of neural networks that it is impossible to know what they are doing. Here we get a taste of that. While I am sure there is some way to do a mathematical analysis on this simple 2-layer, 3-node network to figure out exactly how it came to those weights and biases, trying that on a big network is going to be really hard. It reminds me of learning quantum physics: once you get past hydrogen, the math is really hard.
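For a toy network you can at least print what it learned; here is a sketch in the style of the course exercises (not the actual lab code), a 2-layer, 3-node model whose weights and biases you can read off directly:

<syntaxhighlight lang="python">
# Sketch: train a tiny 2-layer network on y = 2x - 1 and print its learned
# weights and biases, the kind of inspection that stops being readable at scale.
import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(2, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

for layer in model.layers:
    weights, biases = layer.get_weights()
    print(layer.name, "weights:", weights.ravel(), "biases:", biases)
</syntaxhighlight>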

neurons in action

Comparing the FirstNeuralNetwork (FNN) to Minimizing-Loss (ML), it seems that 'sgd' in FNN is the part of ML's train function that uses the derivatives and the learning rate to create new values for `w` and `b`. Specifying a learning rate of 0.09 allows ML to converge quickly. As my BU professor said, if you have enough parameters to tweak, you can accurately model almost anything. It is not obvious what learning rate FNN is using. What is the magic sauce there?
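On the magic sauce: passing the string 'sgd' just means Keras's default SGD settings; one way to check them (assuming TF 2.x):

<syntaxhighlight lang="python">
# See what optimizer="sgd" actually means by printing the default SGD config.
import tensorflow as tf

opt = tf.keras.optimizers.get("sgd")
print(opt.get_config())   # in TF 2.x this shows learning_rate=0.01, momentum=0.0, ...
</syntaxhighlight>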


corn worm
https://www.youtube.com/watch?v=23Q7HciuVyM
air quality
https://www.youtube.com/watch?v=9r2VVM4nfk8
retinopathy
https://www.youtube.com/watch?v=oOeZ7IgEN4o


  • Dense layers
  • Convolutional layers contain filters that can be used to transform data. The values of these filters will be learned in the same way as the parameters in the Dense neuron you saw here. Thus, a network containing them can learn how to transform data effectively. This is especially useful in computer vision, which you'll see later in this course. We'll even use these convolutional layers, typically used for vision models, to do speech detection! Are you wondering how or why? Stay tuned!
  • Recurrent layers learn about the relationships between pieces of data in a sequence. There are many types of recurrent layer, with a popular one called LSTM (Long Short-Term Memory) being particularly effective. Recurrent layers are useful for predicting sequence data (like the weather) or understanding text.

You'll also encounter layer types that don't learn parameters themselves, but which can affect the other layers. These include dropout layers, which are used to reduce the density of connections between dense layers to make them more efficient; pooling, which can be used to reduce the amount of data flowing through the network and remove unnecessary information; and Lambda layers, which allow you to execute arbitrary code.
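A toy model touching most of these layer types (the 32x32 input and layer sizes are arbitrary; a recurrent layer would be declared the same way, e.g. `tf.keras.layers.LSTM(units)`, just on sequence-shaped input):

<syntaxhighlight lang="python">
# Sketch: one small model using a Lambda layer, Conv2D, pooling, dropout, and Dense.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 1)),
    tf.keras.layers.Lambda(lambda x: x / 255.0),           # arbitrary code: rescale pixels
    tf.keras.layers.Conv2D(8, (3, 3), activation="relu"),  # learned filters
    tf.keras.layers.MaxPooling2D((2, 2)),                  # shrink the data flowing through
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.25),                         # thin out connections during training
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.summary()
</syntaxhighlight>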

https://www.hackster.io/news/easy-tinyml-on-esp32-and-arduino-a9dbc509f26c

@Timothy_Bonner on plants light and water


So I've just had a replacement heart valve put in. I have a $35 Mi band that periodically reads my heart rate and other data. I have guidelines from my doctor that tell me to keep my heart rate within 20-30 beats-per-minute of my resting rate during exercise and I have an exercise program from the physical therapist on how much I can do during each week of my recovery. Maybe tinyML could combine the static data doc/pt guidelines with the 'edge' data of heart rate, distance, speed, hills, breathing rate, recovery from exercise rate, weight to produce a gentle reminder that maybe I could speed up a bit or slow down or take a rest or get off my butt in some way that optimizes my recovery.

A. Which one of these cases do you find most concerning? Which concerns you the least?
B. What do you consider to be the relevant ethical challenges?
C. What do you think the designers of this technology could have done differently?
D. How can you apply learnings from these examples to your own job? Your personal life?
E. Do you agree or disagree with what others have already posted in the forum? Why?


surveillance capitalism is pervasive

The Nest fiasco is one example. When we had guests over and we were chatting in the living room and Alexa said "I am sorry, I didn't quite hear that," that was another. But the damage to civil society by the algorithms of Facebook, YouTube, and Twitter is incalculable, hidden behind the wilful obfuscation and disregard for privacy law by big tech. It is a major flaw of the internet that we depend on big tech to authenticate us, giving them unbridled access to our data and its use by AI to mold the very way we think. We need to authenticate ourselves and keep control and ownership of our own data. Tim Berners-Lee would agree. If our esteemed teacher uses "OK Google" as an example for the 20th time without touching on the implications, I think my head will explode. Why do we have to worship at the altar of big tech? IoT is their next frontier. We need to watch out.