🗃️ examples
1 item
📄️ compress model
The number of Embedding net networks in the DP model is $N^2$, where $N$ is the number of atom types. As the number of atom types increases, the number of Embedding net networks grows rapidly, enlarging the computational graph for backpropagation and making it one of the bottlenecks for DP model inference. Timing statistics for the inference of a quinary alloy system with the DP model show that Embedding net computation and its gradient calculation account for over 90% of the total time, indicating significant optimization potential.
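The quadratic growth can be made concrete with a tiny sketch (illustrative only, not PWMLFF code): one embedding net is needed per ordered pair of atom types, so a quinary alloy already requires 25 nets.

```python
# Illustrative sketch (not PWMLFF code): the DP model keeps one
# embedding net per ordered (type_i, type_j) pair of atom types,
# so the count grows quadratically with the number of types N.

def num_embedding_nets(n_types: int) -> int:
    """One embedding net per ordered pair of atom types."""
    return n_types ** 2

# For a quinary (5-element) alloy system:
print(num_embedding_nets(5))  # 25 embedding nets
```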
📄️ type embedding
Since the DP model uses $N^2$ Embedding Nets, where $N$ is the number of element types, both training and inference speed are limited when the system contains many element types. This also restricts the potential of the DP model to scale to larger models. Considering that the $N^2$ Embedding nets implicitly encode information about the element types, we can achieve a similar effect by adjusting $S_{ij}$ and concatenating the physical property information of the element types with it. This way, only a single Embedding Net is needed instead of $N^2$.
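The idea can be sketched with a minimal NumPy example (assumed shapes, layer sizes, and names are hypothetical, not the PWMLFF implementation): a per-element physical-property vector is concatenated onto $S_{ij}$, and the result passes through a single shared embedding net rather than one net per type pair.

```python
# Hedged sketch of type embedding (names and shapes are assumptions):
# instead of selecting one of N^2 per-type-pair embedding nets,
# concatenate the neighbor element's physical-property vector onto
# s_ij and feed the result through one shared embedding net.
import numpy as np

rng = np.random.default_rng(0)

def shared_embedding_net(x, W1, b1, W2, b2):
    # A tiny two-layer MLP standing in for the single Embedding Net.
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

n_types, prop_dim, hidden, out_dim = 5, 4, 8, 16
# Hypothetical per-element physical-property vectors (random here).
type_props = rng.normal(size=(n_types, prop_dim))

W1 = rng.normal(size=(1 + prop_dim, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, out_dim));      b2 = np.zeros(out_dim)

# For a neighbor j of element type t_j with smoothed distance term s_ij:
s_ij, t_j = 0.7, 2
x = np.concatenate([[s_ij], type_props[t_j]])  # [s_ij | type features]
g_ij = shared_embedding_net(x, W1, b1, W2, b2)
print(g_ij.shape)  # (16,)
```

Because the type information enters through the input features, the same weights serve every type pair, so the parameter count no longer grows with $N^2$.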
📄️ Python inference
We provide two types of Python inference methods. The first directly predicts structures, as described in the section Predict Structures, using the `infer` command. The second predicts large numbers of files in pwmlff/npy, vasp/outcar, or pwmat/movement formats (or a mix of these); for this we offer a JSON configuration file option, as detailed in the section Mixed Data Prediction, using the `test` command.