PyTorch vs. TensorFlow: Choosing the Right Framework in 2026 | The PyCharm Blog
Choosing between PyTorch and TensorFlow depends on project needs, as each excels in different areas of AI development.
Hugging Face has announced the first release candidate of Transformers v5. This marks an important step for the Transformers library, which has evolved significantly since the v4 release five years ago. It has transitioned from a specialized model toolkit to a key resource in AI development, currently recording over three million installations per day, with a total of more than 1.2 billion installs.
Soumith Chintala, a former AI leader at Meta and one of the most influential figures in modern AI infrastructure, has joined former OpenAI CTO Mira Murati's startup, Thinking Machines Lab. '[T]hinking machines...the people are incredible,' Chintala posted to his X account on Tuesday. He also updated his X bio and LinkedIn profile to indicate that he is now working at the startup, marking his first move since leaving Meta earlier this month.
In this lesson, you will learn how to convert a pre-trained ResNetV2-50 model using PyTorch Image Models (TIMM) to ONNX, analyze its structure, and test inference using ONNX Runtime. We'll also compare inference speed and model size against standard PyTorch execution to highlight why ONNX is better suited for lightweight AI inference. This prepares the model for integration with FastAPI and Docker, ensuring environment consistency before deploying to AWS Lambda.
Meta's PyTorch team has unveiled Monarch, an open-source framework designed to simplify distributed AI workflows across multiple GPUs and machines. The system introduces a single-controller model that allows one script to coordinate computation across an entire cluster, reducing the complexity of large-scale training and reinforcement learning tasks without changing how developers write standard PyTorch code. Monarch replaces the traditional multi-controller approach, in which multiple copies of the same script run independently across machines, with a single-controller model.
The PyTorch team at Meta, stewards of the PyTorch open source machine learning framework, has unveiled Monarch, a distributed programming framework intended to bring the simplicity of PyTorch to entire clusters. Monarch pairs a Python-based front end, which integrates with existing code and libraries such as PyTorch, with a Rust-based back end that provides performance, scalability, and robustness, the team said.
For Developers:
* Never use pickle for untrusted data: This cannot be emphasized enough.
* Never assume checkpoint files are safe: Checkpoint deserialization is vulnerable to supply chain attacks.
* Always use weights_only=True when using PyTorch's load functions.
* Restrict to trusted classes: Restrict deserialization to only trusted classes.
* Implement defense in depth: Don't rely on a single security measure.
* Consider alternative formats: Safetensors, ONNX, or other secure serialization formats should all be considered.
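The weights_only recommendation above can be illustrated with a short sketch. `torch.load(..., weights_only=True)` restricts unpickling to tensors and a small allow-list of primitive types, so a checkpoint that tries to deserialize arbitrary objects is rejected rather than executed.

```python
import torch

# Save a checkpoint containing only tensors and primitives.
state = {"weight": torch.randn(4, 4), "step": 10}
torch.save(state, "ckpt.pt")

# Load it back with the restricted unpickler. Arbitrary Python objects
# embedded in a malicious file would raise an error here instead of
# running attacker-controlled code.
loaded = torch.load("ckpt.pt", weights_only=True)
print(sorted(loaded.keys()))
```

Note that recent PyTorch releases default `weights_only` to True, but passing it explicitly documents the intent and protects code running on older versions.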