Any way to borrow compute from an Apple M1?
admin
To borrow compute from an Apple M1 Max for tasks like running LLaMA 3, have your friend run Ollama on the laptop; it serves a local HTTP API on port 11434. By default Ollama binds only to localhost, so it needs to be started with OLLAMA_HOST=0.0.0.0 to be reachable from other machines. Then join both machines to a Tailscale or ZeroTier network, which gives you a stable private IP into the laptop without exposing it to the public internet. From there you can make plain HTTP API calls (see the sketch below) instead of needing remote desktop access, and the M1's GPU and unified memory are only tied up while a request is in flight. (Ollama covers the LLM side; something like Stable Diffusion would need its own separate server.)
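As a minimal sketch of what the remote call looks like, here is a Python snippet hitting Ollama's `/api/generate` endpoint over the tailnet. The IP `100.x.y.z` is a placeholder for the laptop's Tailscale address, and it assumes `ollama pull llama3` has already been run on the host:

```python
import requests

# Placeholder Tailscale IP of the friend's M1 Max; Ollama listens on 11434.
OLLAMA_URL = "http://100.x.y.z:11434/api/generate"

resp = requests.post(
    OLLAMA_URL,
    json={
        "model": "llama3",   # assumes the model was pulled on the host
        "prompt": "Summarize the benefits of unified memory on Apple Silicon.",
        "stream": False,     # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Setting `"stream": False` keeps the example simple; for interactive use you would leave streaming on and read the response line by line.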