
Thoughts from Mackenzie Ocana #1

35 changes: 24 additions & 11 deletions markdown/Lake_trout_quarto.qmd
@@ -161,22 +161,35 @@ expand.grid(M = seq(.15, .25, .1), B_msy = seq(4, 20, 4)) %>%

### How to update

I can envision several ways to update/improve/localize this model, and I'm not sure which is most appropriate.

- In instances where we have abundant data, it might be appropriate to estimate our own equation to replace the equation Lester uses. Without knowing much about our data availability, it seems likely this approach could work for asymptotic length and (possibly) weight.

- If we could obtain variance estimates from the authors, we may be able to use Bayesian methods to update their parameter estimates using our data (a minimal sketch of that kind of update follows this list).

- If we know some values, it might make sense to just plug in our known values as opposed to using the Lester equation to estimate the same thing. The main parameter where this might make some sense is thermocline depth, which is used to estimate hypolimnion volume and thus affects downstream estimates of epilimnion volume, habitat suitability, and biomass density.
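
As a rough illustration of the Bayesian idea above, here is a minimal sketch in R, assuming we could treat a published Lester coefficient and its standard error as a normal prior and combine it with an estimate refit from Alaskan data. Every number below is a hypothetical placeholder, not a value from the paper or from our data.

```r
# Minimal sketch of a conjugate normal-normal update: shrink a published
# Lester coefficient toward an estimate from our own lakes.  All numbers are
# hypothetical placeholders.
prior_mean <- 0.21   # hypothetical published coefficient
prior_se   <- 0.05   # hypothetical published standard error
local_est  <- 0.17   # hypothetical estimate refit from Alaskan lakes
local_se   <- 0.08   # hypothetical standard error of the Alaskan estimate

w_prior <- 1 / prior_se^2            # precision weights
w_local <- 1 / local_se^2
post_mean <- (w_prior * prior_mean + w_local * local_est) / (w_prior + w_local)
post_se   <- sqrt(1 / (w_prior + w_local))
c(posterior_mean = post_mean, posterior_se = post_se)
```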
There are two ways to use Lester's model: (1) we have enough of our own data to fit the model and update parameters, or (2) we use the process Lester, Shuter, Jones, and Sandstrom took to create their model (e.g., look for relationships, create equations, estimate MSY, etc.) to create our own model for our Alaskan lake trout lakes. Note that both ways involve creating our own model, inspired by Lester et al.'s work.

1. We have enough data from our Alaskan lake trout lakes to fit the model and update parameters.
    1. First, we look at the relationships between length infinity, weight infinity, thermocline depth, hypolimnetic volume, epibenthic volume, etc. to see if the relationships are the same for Alaskan lakes/lake trout.
        1. If the relationships are the same, we use the Alaskan data to update parameter values.
        2. If the relationships aren't the same, we update the equations suggested by Lester to reflect the relationships we see in our data. We may have to add in effects on lake trout survival from winter ice thickness, snow melt, etc. We then update parameter values in our new equations using Alaskan data.
    2. We calculate sustained yield for each of our lakes, following Lester's suggestions.
    3. We use our version of the Lester model to calculate MSY, and then compare it to sustained yield for our lakes.
        1. Do our calculations for biomass and MSY make sense? Are there factors influencing lake trout survival that aren't in the model? How can we be sure that the resulting MSY is actually maximum sustained yield for our fish?
        2. Regardless, use the MSY for individual lakes with extreme caution. If we want to know how a lake trout population is doing, then we need abundance and harvest data on that population.
        3. Since data on multiple lakes were used in the model, our population of inference is all the Alaskan lakes in our dataset. Thus, the MSY calculated from our own model using Alaskan data is a description of all our Alaskan lakes and not just one. Lake information plugged into the model gives average MSY for Alaskan lakes with the same information as was plugged in. Notice that the MSY applies to lakes of a certain size, depth, etc. in general and may not be true for a specific lake of the same size, depth, etc.
2. We don't have much data on Alaskan lakes, and may never have enough. We use the process Lester et al. took to create their model (a rough sketch of the first two steps follows this list).
    1. We determine average annual sustainable exploitation rates for the lakes where the abundance hasn't changed and we have sufficient abundance estimates and SWHS estimates.
    2. We explore relationships between these exploitation rates and the other data we have on length, weight, lake depth, etc.
    3. We ask ourselves: are these relationships strong enough to tell us anything about our lakes and lake trout populations?
        1. If so, create equations based on the relationships we see to create our own model. Update parameter values in the equations based on our Alaskan data. Then use our model to calculate sustainable exploitation rates for our lakes.
            1. Do those sustainable exploitation rates make sense based on what we do know about those lakes?
            2. Regardless, use the sustainable exploitation rates for an individual lake calculated in this way with extreme caution. If we want to know how a lake trout population is doing, then we need abundance and harvest data on that population.
            3. Since data on multiple lakes were used in the model, our population of inference is all the Alaskan lakes in our dataset. Thus, the sustainable exploitation rates calculated from our own model using Alaskan data are a description of all our Alaskan lakes and not just one. Lake information plugged into the model gives average sustainable exploitation rates for Alaskan lakes with the same information as was plugged in. Notice that the sustainable exploitation rate applies to lakes of a certain size, depth, etc. in general and may not be true for a specific lake of the same size, depth, etc.
        2. If not, we need to collect more data within our budget to update the model, or be content calculating sustainable exploitation rates for an individual lake using abundance and SWHS estimates for said lake and then using those exploitation rates as a management tool for that specific lake.
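
As a rough illustration of the first two steps under option 2, here is a minimal sketch in R. Every lake name, value, and column name below is a hypothetical placeholder, not our actual abundance or SWHS data.

```r
# Hypothetical workflow for option 2, steps 1-2: compute average annual
# exploitation rates from abundance and SWHS harvest, then explore how those
# rates relate to lake characteristics.  All data below are placeholders.
lakes <- data.frame(
  lake         = c("lake_A", "lake_B", "lake_C", "lake_D", "lake_E"),
  abundance    = c(5000, 12000, 3000, 8000, 1500),   # estimated adult abundance
  swhs_harvest = c(250, 900, 90, 480, 45),           # mean annual SWHS harvest (fish)
  max_depth_m  = c(30, 75, 20, 55, 15),
  area_ha      = c(400, 2500, 150, 1200, 80)
)

# Step 1: average annual exploitation rate for lakes with stable abundance
lakes$exploit_rate <- lakes$swhs_harvest / lakes$abundance

# Step 2: look for relationships between exploitation rate and lake characteristics
fit <- lm(exploit_rate ~ log(area_ha) + max_depth_m, data = lakes)
summary(fit)   # step 3: are these relationships strong enough to be useful?
```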

### How to ground truth

The other issue I think we need to consider is how to verify the model is making accurate management recommendations for our lakes. You could think of the approach outlined above as using the data we have available to "train" the model to fit our lakes. An approach which is conceptually separate from that outlined above would be to hold our data out of the model and use it to "test" the model for our situation. While this could be appropriate for any parameter, it is likely our only option to gain confidence in the model wrt yield (i.e., use SWHS records for these stocks, or any direct estimate of biomass when available, to ground truth the estimate of MSY for the lake where we have harvest information).
Option 1 under the "How to update" section is trickier and requires a lot of data and validation. We have to make sure we are seeing strong relationships in our data, which then create the equations that make up the model. Option 2 allows us to use the data we have, but it can be equally tricky: we still have to make sure we are seeing strong relationships in our data before building equations from them.
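
A minimal sketch of the hold-out / ground-truthing idea, in R: compare model-predicted MSY against mean annual SWHS harvest for lakes kept out of model fitting. All lake names and numbers below are hypothetical placeholders.

```r
# Hypothetical ground-truthing check: compare model-predicted MSY (kg/yr)
# against mean annual SWHS harvest for held-out lakes.  Placeholder values only.
ground_truth <- data.frame(
  lake         = c("lake_A", "lake_B", "lake_C"),
  msy_pred_kg  = c(1200, 450, 300),    # predicted by our Lester-style model
  swhs_mean_kg = c(800, 500, 120)      # mean annual harvest from SWHS records
)
ground_truth$harvest_to_msy <- ground_truth$swhs_mean_kg / ground_truth$msy_pred_kg

# Lakes where observed harvest already exceeds predicted MSY deserve a hard look:
# either the prediction is low, or the current harvest may not be sustainable.
ground_truth[ground_truth$harvest_to_msy > 1, ]
```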

### How to not abuse the Lester model

I think it's important to note that Lester et al. intended this model to be a regional-scale diagnostic, which is indicated by the title of Lester et al.'s paper (i.e., "A General, Life History Based Model for Sustainable Exploitation of Lake Charr across their Range"). Notice that the model is being developed with the goal of finding sustainable exploitation for Lake Charr across their range, although the authors recognize how landscape variation can affect Lake Charr population dynamics. They also felt further validation of the biomass sub-model for colder northern regions was critical, because much of the data to inform this model came from lakes in the southern portion of the species' range. They felt the MSY estimates were too variable to be of much use as well. The model is mostly heuristic in that no estimates of variability are produced and we really can't say much about the quality of its estimates. I hear staff talking about wanting to use these estimates of MSY to modify regulations on individual lakes. I'd urge caution here and think hard about how to make decisions for groups of similar lakes. The Lester paper gives some instruction on how they thought this should be done.

When I look at Jordy's comparison between the lake area and Lester models, I see Crosswind and Louise showing the largest differences on the kilogram scale but Shallow Tangle and Little Sevenmile showing the largest proportional changes from the LAM estimates. Do we have the management resolution to respond to the changes we are seeing for the smaller MSY lakes? Is there a pattern in the model differences wrt population viability, accessibility, or fishing pressure that we can manage to?
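
A small sketch in R of how I'm reading that comparison: the lake names are real, but every number below is a made-up placeholder standing in for Jordy's actual LAM and Lester estimates.

```r
# Hypothetical LAM vs. Lester-model MSY comparison (kg/yr).  Placeholder values,
# not Jordy's estimates; shown only to separate absolute from proportional change.
comparison <- data.frame(
  lake      = c("Crosswind", "Louise", "Shallow Tangle", "Little Sevenmile"),
  lam_kg    = c(2000, 1800, 60, 40),
  lester_kg = c(1400, 1300, 120, 90)
)
comparison$diff_kg   <- comparison$lester_kg - comparison$lam_kg   # absolute (kg) change
comparison$diff_prop <- comparison$diff_kg / comparison$lam_kg     # proportional change
comparison
```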

The other thing I noticed is that Jordy was considering different scaling factors for harvest targets as a percent of MSY. If we are assuming the yield potential from the LAM is half of MSY and also trying to make sure our harvest stays below LAM yield potential, then scaling factors above 0.5 represent liberalizations that we are making by choice, rather than as a result of this new assessment.
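
To make that arithmetic concrete, here is a tiny sketch in R with a hypothetical MSY value: any scaling factor above 0.5 produces a harvest target that exceeds the assumed LAM yield potential.

```r
# If LAM yield potential is assumed to be 0.5 * MSY, then any harvest target
# set at scale * MSY with scale > 0.5 exceeds the LAM yield potential.
msy_kg    <- 1000                        # hypothetical Lester-model MSY
scales    <- c(0.25, 0.5, 0.75, 1.0)     # candidate scaling factors
targets   <- scales * msy_kg             # harvest targets as a fraction of MSY
lam_yield <- 0.5 * msy_kg                # assumed LAM yield potential
data.frame(scale = scales, target_kg = targets,
           exceeds_lam = targets > lam_yield)
```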