d1b85f5...
by
kunal-vaishnavi <email address hidden>
Reduce LLaMA memory usage (#18181)
### Description
This PR reduces the memory usage when exporting and benchmarking LLaMA.
### Motivation and Context
- Exporting: The PyTorch model is deleted from memory after a successful
export instead of deleting it from memory after exporting + converting
the ONNX model to the desired precision.
- Benchmarking: In the ONNX model with GroupQueryAttention, the KV cache
inputs use the same GPU memory for both the prompt and token generation
benchmarks.
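The export-side change can be sketched as follows. `export_to_onnx` and `convert_precision` are hypothetical stubs standing in for the script's real helpers, not the actual API; the point is only where the PyTorch model is released.

```python
import gc

def export_to_onnx(model, path):
    # Stub: the real script calls torch.onnx.export here.
    return path

def convert_precision(path):
    # Stub: the real script converts the ONNX model (e.g. FP32 -> FP16) here.
    return path

def export_and_convert(model, path):
    export_to_onnx(model, path)
    # Free the PyTorch weights as soon as the export succeeds,
    # instead of holding them in memory through precision conversion.
    del model
    gc.collect()
    return convert_precision(path)
```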
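A minimal sketch of the benchmarking-side idea, using NumPy arrays as stand-ins for the GPU-resident KV cache inputs (the shapes here are made up for illustration): allocate the cache once and pass the same buffers to both benchmark passes rather than allocating fresh inputs for each.

```python
import numpy as np

# Hypothetical shapes; the real script sizes these from the model config.
batch, heads, max_seq, head_dim = 1, 32, 256, 64

# Allocate the KV cache once, up front.
past_key = np.zeros((batch, heads, max_seq, head_dim), dtype=np.float16)
past_value = np.zeros_like(past_key)

def bench_prompt(k, v):
    k[:, :, :8, :] = 1.0  # pretend the prompt pass filled 8 positions
    return k, v

def bench_token(k, v):
    k[:, :, 8:9, :] = 2.0  # token generation appends one position
    return k, v

# Both benchmarks operate on the same underlying buffers.
k1, _ = bench_prompt(past_key, past_value)
k2, _ = bench_token(past_key, past_value)
assert k1 is past_key and k2 is past_key
```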
2b95e74...
by
RandySheriffH <email address hidden>
Versioning for custom op (#18088)
Allow custom ops to have versions.
---------
Co-authored-by: Randy Shuai <email address hidden>
Add mobile CIs to list run by script for external PRs. (#18094)
### Description
Add the mobile CIs to the list so we check that external PRs don't break
them.
### Motivation and Context
A recent external PR was found to break the iOS CI after check-in.
The `RemoveDuplicateCastTransformer` fairly naively removed Cast nodes
from the graph without considering precision loss when using the same
`TypeGroup`. For instance, F64 -> F32 -> F64 would be optimised out of
the graph.
I also noticed that signedness was not accounted for, which is not
covered by any existing issue but is a problem. For example, doing int ->
unsigned int -> int produces very different values for negative inputs
and so should not be optimised out.
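The F64 -> F32 -> F64 case is easy to demonstrate with a tiny NumPy check (this is an illustration, not the transformer's actual code): the intermediate cast discards mantissa bits that the final cast cannot restore.

```python
import numpy as np

x = np.float64(np.pi)
# F64 -> F32 -> F64: not a no-op.
roundtrip = np.float64(np.float32(x))
assert roundtrip != x  # pi is not exactly representable in float32
```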
One could argue that we shouldn't be performing such cast elimination at
all (at least not in this transformer). The original scope might be better
restricted to eliminating only the unnecessary casts introduced by the
`InsertCastTransformer` and no others.