Commit 8c86815

Merge branch 'main' into hotfix_vasp_kpath
2 parents d485dce + 2cf6de4 commit 8c86815

174 files changed: +165195 −224 lines


.github/workflows/add_pages_doc.yml

Lines changed: 35 additions & 0 deletions

@@ -0,0 +1,35 @@
+name: Build and Deploy Docs
+
+on:
+  push:
+    branches: [ main ]  # or your default branch name
+  pull_request:
+    branches: [ main ]
+
+jobs:
+  build:
+    if: github.repository == 'deepmodeling/DeePTB'
+    runs-on: ubuntu-22.04
+    steps:
+      - uses: actions/checkout@v4
+      - name: Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: '3.11'
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install sphinx sphinx_book_theme linkify-it-py
+          pip install myst-nb jupyter
+          # install deepmodeling_sphinx
+          if [ -f docs/requirements.txt ]; then pip install -r docs/requirements.txt; fi
+      - name: Build docs
+        run: |
+          cd docs
+          sphinx-build -b html . _build/html
+      - name: Deploy
+        uses: peaceiris/actions-gh-pages@v4
+        if: github.event_name == 'push' && github.ref == 'refs/heads/main'
+        with:
+          github_token: ${{ secrets.GITHUB_TOKEN }}
+          publish_dir: ./docs/_build/html
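Note that the Deploy step above is gated so that only pushes to `main` publish the built pages; pull-request builds compile the docs but never deploy. A minimal sketch of that gate in Python (`should_deploy` is our own illustrative name, not part of the workflow or of GitHub Actions):

```python
def should_deploy(event_name: str, ref: str) -> bool:
    """Mirror of the workflow condition:
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    """
    return event_name == "push" and ref == "refs/heads/main"

# A push to main publishes; a pull request against main does not.
print(should_deploy("push", "refs/heads/main"))          # True
print(should_deploy("pull_request", "refs/heads/main"))  # False
```

This split — build on every event, publish only on trusted pushes — keeps untrusted PR code from writing to the gh-pages branch.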

README.md

Lines changed: 14 additions & 16 deletions

@@ -66,42 +66,40 @@ Installing **DeePTB** is straightforward. We recommend using a virtual environme
 
 Highly recommended to install DeePTB from source to get the latest features and bug fixes.
 1. **Setup Python environment**:
-
    Using conda (recommended, python >=3.9, <=3.12 ), e.g.,
    ```bash
    conda create -n dptb_venv python=3.10
    conda activate dptb_venv
    ```
    or using venv (make sure python >=3.9,<=3.12)
+
    ```bash
    python -m venv dptb_venv
    source dptb_venv/bin/activate
+   ```
 
 2. **Clone DeePTB and Navigate to the root directory**:
    ```bash
    git clone https://github.com/deepmodeling/DeePTB.git
    cd DeePTB
    ```
-3. **Install `torch` and `torch-scatter`** (two ways):
-   - **Recommended**: Install torch and torch-scatter using the following commands:
 
+3. **Install `torch`**:
+   ```bash
+   pip install "torch>=2.0.0,<=2.5.0"
+   ```
+4. **Install `torch-scatter`** (two ways):
+   - **Recommended**: Install torch and torch-scatter using the following commands:
   ```bash
   python docs/auto_install_torch_scatter.py
   ```
-
   - **Manual**: Install torch and torch-scatter manually:
-     1. install torch:
-     ```bash
-     pip install "torch>=2.0.0,<=2.5.0"
-     ```
-
-     2. install torch-scatter:
-     ```bash
-     pip install torch-scatter -f https://data.pyg.org/whl/torch-${version}+${CUDA}.html
-     ```
-     where `${version}` is the version of torch, e.g., 2.5.0, and `${CUDA}` is the CUDA version, e.g., cpu, cu118, cu121, cu124. See [torch_scatter doc](https://github.com/rusty1s/pytorch_scatter) for more details.
-
-4. **Install DeePTB**:
+   ```bash
+   pip install torch-scatter -f https://data.pyg.org/whl/torch-${version}+${CUDA}.html
+   ```
+   where `${version}` is the version of torch, e.g., 2.5.0, and `${CUDA}` is the CUDA version, e.g., cpu, cu118, cu121, cu124. See [torch_scatter doc](https://github.com/rusty1s/pytorch_scatter) for more details.
+
+5. **Install DeePTB**:
    ```bash
    pip install .
    ```
docs/advanced/e3tb/advanced_input.md

Lines changed: 3 additions & 0 deletions

@@ -86,6 +86,9 @@ In [7]: idp.get_irreps_ess()
 Out[7]: 7x0e+6x1o+6x2e+2x3o+1x4e
 ```
 
+Rules for the irreps setting:
+First, we should check the largest angular momentum defined in the DFT LCAO basis, and then double it as our highest order of irreps (because of the angular-momentum addition rule). For example, for the `1s1p` basis, the irreps should contain features with angular momentum from 0 to 2, which is 2 times 1, the angular momentum of the `p` orbital. If the basis contains a `d` orbital, then the irreps should contain angular momentum up to 4. `f` and `g` or even higher orbitals are also supported.
+
 `n_layers`: indicates the number of layers of the networks.
 
 `env_embed_multiplicity`: decide the irreps number when initializing the edge and node features.
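The doubling rule added above can be written down mechanically: every angular momentum l from 0 to 2·l_max must appear, with parity (−1)^l, i.e. `e` for even l and `o` for odd l. A sketch of our own (not a DeePTB API):

```python
def required_irrep_types(l_max_basis: int) -> list[str]:
    """Irrep types needed when the basis's highest orbital has angular
    momentum l_max_basis (s=0, p=1, d=2, ...). Angular-momentum addition
    of two basis orbitals reaches up to 2 * l_max_basis."""
    return [f"{l}{'e' if l % 2 == 0 else 'o'}"
            for l in range(2 * l_max_basis + 1)]

print(required_irrep_types(1))  # 1s1p basis -> ['0e', '1o', '2e']
print(required_irrep_types(2))  # basis with d -> ['0e', '1o', '2e', '3o', '4e']
```

These are exactly the irrep types appearing in the `Out[7]: 7x0e+6x1o+6x2e+2x3o+1x4e` example, whose basis goes up to `d`.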

docs/conf.py

Lines changed: 3 additions & 1 deletion

@@ -31,9 +31,11 @@
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
 extensions = [
-    'myst_parser',
+    'myst_nb',
     'deepmodeling_sphinx',
 ]
+nb_execute_notebooks = "off"  # do not execute notebooks
+
 myst_enable_extensions = [
     "amsmath",
     "colon_fence",
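One caveat on the new config line: depending on the installed myst-nb version, the execution option may be spelled `nb_execution_mode` (the name used in current myst-nb releases) rather than `nb_execute_notebooks`. This is an assumption to verify against the installed myst-nb; if notebooks still execute during the build, the equivalent setting would be:

```python
# docs/conf.py -- current myst-nb option name (check your myst-nb version)
nb_execution_mode = "off"  # do not execute notebooks during the docs build
```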

docs/quick_start/easy_install.md

Lines changed: 8 additions & 12 deletions

@@ -29,32 +29,28 @@ Highly recommended to install DeePTB from source to get the latest features and
    ```bash
    python -m venv dptb_venv
    source dptb_venv/bin/activate
-
+   ```
 2. **Clone DeePTB and Navigate to the root directory**:
    ```bash
    git clone https://github.com/deepmodeling/DeePTB.git
    cd DeePTB
    ```
-3. **Install `torch` and `torch-scatter`** (two ways):
-   - **Recommended**: Install torch and torch-scatter using the following commands:
-
+3. **Install `torch`**:
   ```bash
-   python docs/auto_install_torch_scatter.py
+   pip install "torch>=2.0.0,<=2.5.0"
   ```
-
-   - **Manual**: Install torch and torch-scatter manually:
-     1. install torch:
+
+4. **Install `torch-scatter`** (two ways):
+   - **Recommended**: Install torch and torch-scatter using the following commands:
   ```bash
-   pip install "torch>=2.0.0,<=2.5.0"
+   python docs/auto_install_torch_scatter.py
   ```
-
-     2. install torch-scatter:
+   - **Manual**: Install torch and torch-scatter manually:
   ```bash
   pip install torch-scatter -f https://data.pyg.org/whl/torch-${version}+${CUDA}.html
   ```
   where `${version}` is the version of torch, e.g., 2.5.0, and `${CUDA}` is the CUDA version, e.g., cpu, cu118, cu121, cu124. See [torch_scatter doc](https://github.com/rusty1s/pytorch_scatter) for more details.
 
-4. **Install DeePTB**:
+5. **Install DeePTB**:
   ```bash
   pip install .
   ```

docs/quick_start/hands_on/e3tb_hands_on.md

Lines changed: 20 additions & 13 deletions

@@ -2,7 +2,7 @@
 
 DeePTB supports training an E3-equivariant model to predict DFT Hamiltonian, density and overlap matrix under LCAO basis. Here, cubic-phase bulk silicon has been chosen as a quick start example.
 
-Silicon is a chemical element; it has the symbol Si and atomic number 14. It is a hard, brittle crystalline solid with a blue-grey metallic lustre, and is a tetravalent metalloid and semiconductor. The prepared files are located in:
+Silicon is a chemical element; it has the symbol Si and atomic number 14. It is a hard, brittle crystalline solid with a blue-grey metallic lustre, and is a tetravalent metalloid and semiconductor (Shut up). The prepared files are located in:
 
 ```
 deeptb/examples/e3/

@@ -19,16 +19,13 @@ deeptb/examples/e3/
 | `-- info.json
 `-- input.json
 ```
-We prepared one frame of silicon cubic bulk structure as an example. The data was computed using DFT software ABACUS, with an LCAO basis set containing 1 `s` and 1 `p` orbital. The cutoff radius for the orbital is 7au, which means the largest bond would be less than 14 au. Therefore, the r_max should be set as 7.4. So we have an info.json file like:
+We prepared one frame of silicon cubic bulk structure as an example. The data was computed using DFT software ABACUS, with an LCAO basis set containing 1 `s` and 1 `p` orbital. We now have an info.json file like:
 
-```json
+```JSON
 {
     "nframes": 1,
     "pos_type": "cart",
-    "AtomicData_options": {
-        "r_max": 7.4,
-        "pbc": true
-    }
+    "pbc": true, # same as [true, true, true]
 }
 ```

@@ -42,7 +39,7 @@ The `input_short.json` file contains the least number of parameters that are req
     "overlap": true
 }
 ```
-In `common_options`, here are the essential parameters. The `basis` should align with the DFT calculation, so 1 `s` and 1 `p` orbital would result in a `1s1p` basis. The `device` can either be `cpu` or `cuda`, but we highly recommend using `cuda` if GPU is available. The `overlap` tag controls whether to fit the overlap matrix together. Benefitting from our parameterization, the fitting overlap only brings negelectable costs, but would boost the convenience when using the model.
+In `common_options`, here are the essential parameters. The `basis` should align with the DFT calculation, so 1 `s` and 1 `p` orbital would result in a `1s1p` basis. The cutoff radius for the orbital is 7 au, which means the largest bond would be less than 14 au. Therefore, the `r_max`, which equals the maximum bond length, should be set as 7.4. The `device` can either be `cpu` or `cuda`, but we highly recommend using `cuda` if a GPU is available. The `overlap` tag controls whether to fit the overlap matrix together. Benefitting from our parameterization, fitting the overlap brings only negligible costs, but is very convenient when using the model.
 
 Here comes the `model_options`:
 ```json

@@ -67,16 +64,26 @@ The `model_options` contains `embedding` and `prediction` parts, denoting the co
 
 In `embedding`, the `method` supports `slem` and `lem` for now, where `slem` has a strictly localized dependency, which has better transferability and data efficiency, while `lem` has an adjustable semi-local dependency, which has better representation capacity, but would require a little more data. `r_max` should align with the one defined in `info.json`.
 
-For `irreps_hidden`, this parameter defines the size of the hidden equivariant irreducible representation, which is highly related to the power of the model. There are certain rules to define this param. First, we should check the largest angular momentum defined in the DFT LCAO basis, the irreps's highest angular momentum should always be double. For example, for `1s1p` basis, the irreps should contain features with angular momentum from 0 to 2, which is 2 times 1, the angular momentum of `p` orbital. If the basis contains `d` orbital, then the irreps should contain angular momentum up to 4. `f` and `g` or even higher orbitals are also supported.
+For `irreps_hidden`, this parameter defines the size of the hidden equivariant irreducible representation, which decides most of the power of the model. There are certain rules to define this param, but for quick usage, we provide a tool that analyses the basis and extracts the essential irreps:
+
+```IPYTHON
+In [1]: from dptb.data import OrbitalMapper
+
+In [2]: idp = OrbitalMapper(basis={"Si": "1s1p"})
+
+In [3]: idp.get_irreps_ess()
+Out[3]: 2x0e+1x1o+1x2e
+```
 
-In `prediction`, we should use the `e3tb` method to let the model know the output features are arranged in **DeePTB-E3** format. The neurons are defined for a simple MLP to predict the slater-koster-like parameters for predicting the overlap matrix, for which [64,64] is usually fine.
+This is the set of independent irreps contained in the basis. The configured irreps should be integer multiples of these essential irreps. The multiplicities can vary with pretty large freedom, but all the types, for example ("0e", "1o", "2e") here, should be included. We usually take a descending order, starting from "32", "64", or "128" for the first "0e" and decaying by half for the later, higher-order irreps. For general rules of the irreps, the user can read the advanced topics in the doc, but for now, you are safe to ignore them!
 
+In `prediction`, we should use the `e3tb` method so the model outputs features in **DeePTB-E3** format. The neurons define a simple MLP that predicts the slater-koster-like parameters for the overlap matrix, for which [64,64] is usually fine.
 
 Now everything is prepared! We can use the following command to train the first model:
 
 ```bash
 cd deeptb/examples/e3
-dptb train ./input/input_short.json -o ./e3_silicon
+dptb train ./input_short.json -o ./e3_silicon
 ```
 
 Here ``-o`` indicates the output directory. During the fitting procedure, we can see the loss curve decrease consistently. When finished, we get the fitting results in the folder ```e3_silicon```.

@@ -87,9 +94,9 @@ python plot_band.py
 ```
 or just using the command line
 ```bash
-dptb run ./run/band.json -i ./e3_silicon/checkpoint/nnenv.best.pth -o ./band_plot
+dptb run ./band.json -i ./e3_silicon/checkpoint/nnenv.best.pth -o ./band_plot
 ```
 
 ![band_e3_Si](https://raw.githubusercontent.com/deepmodeling/DeePTB/main/docs/img/silicon_e3_band.png)
 
-Now you know how to train a **DeePTB-E3** model for Hamiltonian and overlap matrix. For better usage, we encourage the user to read the full input parameters for the **DeePTB-E3** model. Also, the **DeePTB** model supports several post-process tools, and the user can directly extract any predicted properties just using a few lines of code. Please see the basis_api for details.
+Now you know how to train a **DeePTB-E3** model for Hamiltonian and overlap matrix. For better usage, we encourage the user to read the full input parameters for the **DeePTB-E3** model. Also, the **DeePTB** model supports several post-process tools, and the user can directly extract any predicted properties just using a few lines of code. Please see the basis_api for details.
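The multiplicity rule in the hands-on text above (start the first "0e" at 32, 64, or 128 and halve for each higher-order irrep) can be sketched as a few lines of Python. `hidden_irreps` is our own illustrative helper, not a DeePTB function:

```python
def hidden_irreps(ess_types, start=32):
    """Build an irreps string such as '32x0e+16x1o+8x2e' from the essential
    irrep types (e.g. the types in idp.get_irreps_ess()), halving the
    multiplicity for each successive, higher-angular-momentum irrep."""
    parts, mult = [], start
    for t in ess_types:
        parts.append(f"{mult}x{t}")
        mult = max(mult // 2, 1)  # never drop an irrep type entirely
    return "+".join(parts)

print(hidden_irreps(["0e", "1o", "2e"]))       # 32x0e+16x1o+8x2e
print(hidden_irreps(["0e", "1o", "2e"], 128))  # 128x0e+64x1o+32x2e
```

The key constraint, per the text, is that every essential type stays present; only the multiplicities are tuned for model capacity.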

docs/quick_start/hands_on/index.rst

Lines changed: 4 additions & 2 deletions

@@ -3,5 +3,7 @@ A quick Example
 =================================================
 
 .. toctree::
-   sktb_hands_on
-   e3tb_hands_on
+   tutorial1_base_sk
+   tutorial2_data
+   tutorial3_train_si
+   tutorial4_E3_si
