This app is very cool! In walkie-talkie mode, you may want to tweak the UI to make it clearer when the microphone is listening and when it's not. I saw that the microphone icon changes color, but a stronger visual cue may help. Google Translate may be a good point of reference: in its conversation mode, the shape of the microphone icon changes when the mic is active. I've also noticed that my message is sometimes cut short when translated; only the first half gets translated.
Thank you for the feedback! I plan to redo the entire graphical interface soon, and a clearer way to show when the microphone is listening is already in the plans. As for sentences being cut off, you can probably fix that by increasing the microphone's sensitivity in the app settings (there you can also change other settings that control microphone activation, but it is most likely a sensitivity problem).
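Why would low sensitivity cut a sentence in half? A common energy-based voice activity detector stops recording once the signal stays below an energy threshold for a short grace period, so quiet trailing words get dropped. Here is a toy sketch of that behavior; the energy-based approach and all names and numbers are assumptions for illustration, not the app's actual code:

    # Toy energy-based voice activity detector (VAD). Illustrative only.
    import numpy as np

    def detect_utterance(samples, rate=16000, threshold=0.02, hangover_ms=400):
        # threshold: RMS energy below which a 10 ms frame counts as silence.
        # Raising the mic "sensitivity" in a settings screen corresponds to
        # lowering this threshold, so quiet trailing words still register.
        # hangover_ms: silence tolerated before the utterance is closed.
        frame = rate // 100                        # 10 ms frames
        max_silent = hangover_ms // 10             # frames of grace period
        start, end, silent = None, None, 0
        for i in range(0, len(samples) - frame + 1, frame):
            rms = float(np.sqrt(np.mean(samples[i:i + frame] ** 2)))
            if rms >= threshold:
                if start is None:
                    start = i                      # speech begins
                end, silent = i + frame, 0
            elif start is not None:
                silent += 1
                if silent > max_silent:            # speaker judged finished
                    break
        return start, end                          # sample indices, or (None, None)

With the threshold set too high, the quieter second half of a sentence falls below it, the grace period expires, and only the first half reaches the translator.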
AGI Safety from First Principles [1] is a good write-up.
You can read more about instrumental convergence, reward misspecification, goal misgeneralization, and inner misalignment, some specific problems AI safety people care about, by skimming the curriculum of the AI Alignment Course [2], which points to several relevant blog posts and papers on these topics.
Is there a clear argument I can read in 15 minutes or less? If such an argument exists somewhere, can you point me to it?
Also note that we were talking about modern-day LLMs and their descendants here, not science-fiction AGIs. Unless, of course, you have an argument for how one of these LLMs somehow develops into an AGI.
Location: London, UK
Remote: Yes
Willing to relocate: No
Résumé/CV: https://drive.google.com/file/d/1RKujIZVQiBm1zPw8sUvOK7HFE89...
Email: ssosarippe at gmail dot com
LinkedIn: https://www.linkedin.com/in/-sebastian-sosa/
Technologies: Python, PyTorch, Ray, sklearn, Pandas, SQL, Docker, GCP, FastAPI
Skills: Machine Learning, Research Engineering, Experiment Design, Deep Learning, Computer Vision, NLP, Cloud Computing, Demand Forecasting, Propensity Modeling, MLOps.
Bio: Hi! I am a software engineer who has worked in ML engineering and data science for the past five years. I have previously worked on pricing, recommendation systems, CV and NLP classifiers, propensity models, experiment design, and A/B tests, as well as setting up training environments in research settings and translating research code into production pipelines. I've also been involved in setting up MLOps infrastructure: CI/CD, data preprocessing, retraining, and inference pipelines, logging, and model monitoring.
I am open to fully remote positions in either US or UK/Europe time zones. If I work US hours, I plan to relocate to align with Eastern Time.