Started writing the project report and created a Google Sheet as a data repository.
Using OpenRouteService with guidance from Apertus. The API connections are working and we are making progress. One of us is being slightly distracted by 3D robot vision 🤖
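For anyone curious, here is a minimal sketch of the kind of OpenRouteService directions request we have working — the API key and the two Bern coordinates are placeholders, not our actual setup:

```python
import requests

# Placeholder key -- get your own from openrouteservice.org.
ORS_API_KEY = "YOUR_API_KEY"

url = "https://api.openrouteservice.org/v2/directions/driving-car"
body = {
    # Coordinates are [longitude, latitude]; these two points are roughly in Bern (illustrative).
    "coordinates": [[7.4474, 46.9480], [7.4391, 46.9524]],
}
headers = {"Authorization": ORS_API_KEY, "Content-Type": "application/json"}

response = requests.post(url, json=body, headers=headers, timeout=30)
response.raise_for_status()

route = response.json()["routes"][0]
print("Distance (m):", route["summary"]["distance"])
print("Duration (s):", route["summary"]["duration"])
```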
The team is on its way with a solution approach for mapping similarity between types of content. There are useful convolutional networks, and we could generate embeddings from a database and then filter pictures whose embeddings are closest to a query. Using a vision transformer combined with text embeddings, we were able to detect the shapes, and we are working on a combined solution.
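A minimal sketch of the embedding-based filtering idea, assuming a CLIP-style vision transformer from Hugging Face — the model checkpoint, image folder, and query text are placeholders, not our final pipeline:

```python
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder checkpoint; any CLIP-style model with image and text towers works.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Embed all images in a (hypothetical) local folder.
image_paths = sorted(Path("images/").glob("*.jpg"))
images = [Image.open(p).convert("RGB") for p in image_paths]

with torch.no_grad():
    image_inputs = processor(images=images, return_tensors="pt")
    image_emb = model.get_image_features(**image_inputs)
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)

    # Embed a text query and rank images by cosine similarity.
    text_inputs = processor(text=["rooftop solar panels"], return_tensors="pt", padding=True)
    text_emb = model.get_text_features(**text_inputs)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

scores = (image_emb @ text_emb.T).squeeze(-1)
for path, score in sorted(zip(image_paths, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {path.name}")
```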
Screenshot from Celestia by TheLostProbe CC BY 4.0
Here's our Git Repo: https://github.com/longobucco/bern-solar-panel-detection
Team: Luca, Fatma and George
The team has been processing orthophotos and satellite data to try to detect solar panels. There is more literature available now, and we expect clearer results. Two subgroups are working on manual classification: one labelling polygons, the other georeferenced points.
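To illustrate the difference between the two labelling approaches, here is a rough sketch of how patches could be cut from an orthophoto for either polygon or point labels — all file names are hypothetical, and it assumes a projected CRS in metres:

```python
import geopandas as gpd
import rasterio
from rasterio.mask import mask

# Hypothetical file names -- not the actual project data.
ORTHOPHOTO = "bern_orthophoto.tif"
POLYGONS = "panel_polygons.geojson"   # manually digitised panel outlines
POINTS = "panel_points.geojson"       # georeferenced point labels

with rasterio.open(ORTHOPHOTO) as src:
    polygons = gpd.read_file(POLYGONS).to_crs(src.crs)
    points = gpd.read_file(POINTS).to_crs(src.crs)

    # Polygon labels: clip the raster to each outline directly.
    for geom in polygons.geometry:
        patch, _ = mask(src, [geom], crop=True)
        # patch is a (bands, h, w) array ready for a classifier

    # Point labels: buffer each point (e.g. 10 m, assuming metric units)
    # and clip a small window around it.
    for geom in points.geometry:
        patch, _ = mask(src, [geom.buffer(10)], crop=True)
```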
We had three business-side participants and two IT experts trying to clarify the use case. We are planning to develop an e-learning module on building guardrails for a multimodal system. I'd suggest creating a screencast, inspired by the TextCortex YouTube channel, to explain LLM Guardrails (OpenAI) at a general level. This would help interest developers in implementing a solution. But I would really also like to see the product in action from a user perspective in this project first.
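Purely as an illustration of the guardrail idea (not the planned module's design): a tiny input check that screens a prompt before it reaches any model. The blocklist, limits, and the model call are all hypothetical placeholders.

```python
# Illustrative only: a minimal input guardrail in front of a (multimodal) model.
BLOCKED_TOPICS = ["credit card number", "password", "social security"]

def guardrail_check(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a user prompt."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return False, f"blocked: prompt mentions '{topic}'"
    if len(prompt) > 4000:
        return False, "blocked: prompt too long"
    return True, "ok"

def call_model(prompt: str) -> str:
    # Placeholder for the real model call.
    return f"(model response to: {prompt!r})"

def answer(prompt: str) -> str:
    allowed, reason = guardrail_check(prompt)
    if not allowed:
        return f"Request refused ({reason})."
    return call_model(prompt)

print(answer("Please summarise this roof photo."))
print(answer("What is my neighbour's password?"))
```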
After a few hours of work we have managed to collect hardware power metrics from a testbed server in the Begasoft cloud. We need to write another script, because we can currently only read the average energy use. Still struggling to get Apertus running on a local NVIDIA machine, kinda wish we had a Mac ;-) j/k
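A rough sketch of the kind of follow-up script we have in mind: poll instantaneous GPU power draw via `nvidia-smi` and integrate it into energy over time. The sampling interval and output format are placeholders.

```python
import subprocess
import time

SAMPLE_INTERVAL_S = 1.0  # how often to poll power draw

def gpu_power_watts() -> float:
    """Read instantaneous GPU power draw via nvidia-smi (watts, summed over GPUs)."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        text=True,
    )
    return sum(float(line) for line in out.strip().splitlines())

energy_joules = 0.0
try:
    while True:
        power = gpu_power_watts()
        energy_joules += power * SAMPLE_INTERVAL_S  # W * s = J
        print(f"power: {power:6.1f} W   energy so far: {energy_joules / 3600:8.3f} Wh")
        time.sleep(SAMPLE_INTERVAL_S)
except KeyboardInterrupt:
    print(f"total energy: {energy_joules / 3600:.3f} Wh")
```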
Very focused - go for it!!!
* dribs n. pl.: in small amounts, a few at a time
