Update README.md
mrdbourke authored Dec 23, 2023
1 parent 7f32534 commit 2d8e6cb
Showing 1 changed file with 2 additions and 2 deletions: README.md
@@ -451,7 +451,7 @@ Notes:

### Geekbench ML

- All tests done using [Geekbench ML 0.6.0](https://www.geekbench.com/ml/) for Mac.
+ All tests were done using [Geekbench ML 0.6.0](https://www.geekbench.com/ml/) for Mac.

Tests include a series of [inference-only benchmarks](https://www.geekbench.com/doc/ml-0.6-inference-workloads.pdf) across different domains.

@@ -524,4 +524,4 @@ In summary my recommendations are:
* Add total memory count + num GPU cores to results e.g. "Apple_M1_Pro_18GB_Memory_14_GPU_Cores..."
* Add scikit-learn/XGBoost tests, e.g. 100,000 rows, 1,000,000 rows?
* Could I use Keras 3.0 for the same code to run on multiple backends? :thinking:
- * Apple has recently released a deep learning framework called [`MLX`](https://github.com/ml-explore/mlx) which is designed for Apple Silicon, this may significantly improve speed on Apple Silicon Macs, see the `mlx/` directory for more. See this example of Llama 2 running on MLX - https://huggingface.co/mlx-llama/Llama-2-7b-chat-mlx
+ * Apple has recently released a deep learning framework called [`MLX`](https://github.com/ml-explore/mlx) which is designed for Apple Silicon, this may significantly improve speed on Apple Silicon Macs, see the `mlx/` directory for more. See this example of Llama 2 running on MLX - https://huggingface.co/mlx-llama/Llama-2-7b-chat-mlx
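
The Keras 3.0 idea above can be sketched concretely: Keras 3 reads its backend from the `KERAS_BACKEND` environment variable, which must be set before `keras` is imported, so the same model code can run on TensorFlow, JAX, or PyTorch. A minimal sketch (the `select_backend` helper is hypothetical, and the `import keras` line is left commented so the snippet runs even without Keras installed):

```python
import os

# The three main backends Keras 3 ships with (assumption: keras>=3).
VALID_BACKENDS = ("tensorflow", "jax", "torch")

def select_backend(name: str) -> str:
    """Set KERAS_BACKEND; must be called before `import keras` anywhere
    in the process, since Keras reads the variable once at import time."""
    if name not in VALID_BACKENDS:
        raise ValueError(f"unknown backend: {name!r}")
    os.environ["KERAS_BACKEND"] = name
    return name

select_backend("jax")
# import keras                    # same model code would now run on JAX
# print(keras.backend.backend())  # reports the active backend
```

In principle this would let one benchmark script produce comparable numbers across backends without code changes, though per-backend install requirements on Apple Silicon would still differ.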
