
Commit bfc80a1

deploy: 6cda963
1 parent 89c4f75 commit bfc80a1


3 files changed: +9 -9 lines changed


_modules/torch_molecule/generator/molgpt/modeling_molgpt.html

Lines changed: 4 additions & 4 deletions
@@ -305,16 +305,16 @@ Source code for torch_molecule.generator.molgpt.modeling_molgpt
 max_len : int, default=128
     Maximum length of SMILES strings.
 num_task : int, default=0
-    Number of property prediction tasks for conditional generation.
+    Number of property prediction tasks for conditional generation. 0 for unconditional generation.
 use_scaffold : bool, default=False
     Whether to use scaffold conditioning.
 use_lstm : bool, default=False
-    Whether to use LSTM for encoding.
+    Whether to use LSTM for encoding the scaffold.
 lstm_layers : int, default=0
     Number of LSTM layers if use_lstm is True.
 batch_size : int, default=64
     Batch size for training.
-epochs : int, default=10
+epochs : int, default=1000
     Number of training epochs.
 learning_rate : float, default=3e-4
     Learning rate for optimizer.
@@ -342,7 +342,7 @@ Source code for torch_molecule.generator.molgpt.modeling_molgpt
 
     # Training parameters
     batch_size: int = 64
-    epochs: int = 10
+    epochs: int = 1000
     learning_rate: float = 3e-4
    adamw_betas: Tuple[float, float] = (0.9, 0.95)
    weight_decay: float = 0.1
