Yes, but it's less of an issue than in the contexts where we typically talk about it.
Overfitting in most ML is a problem because you task an automated process, with no understanding, with mercilessly optimising for a goal, and then you have to figure out how to spot when it has gamed that goal.
Here you're actively picking architectures yourself, and you should be actively writing new tests to see how your system performs.
You're also dealing with much more general-purpose systems, so the chance that you're overfitting is lower.
Beyond that, you're into the production ML environment, where you need to monitor how things are actually going for your users.
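To make the "actively creating new tests" point concrete, here's a minimal sketch of a hand-written eval harness scored on cases you wrote after development, not during tuning. All names here are hypothetical; `toy_system` is a stand-in for whatever pipeline you're actually evaluating.

```python
# Minimal sketch of a fresh held-out eval set (hypothetical names throughout).

def run_eval(system, cases):
    """Score a system against (input, check) pairs, where check is a predicate."""
    passed = sum(1 for prompt, check in cases if check(system(prompt)))
    return passed / len(cases)

# Stand-in "system": in practice this would call your real pipeline.
def toy_system(prompt):
    return prompt.upper()

# Cases written fresh, after development, never used for tuning.
fresh_cases = [
    ("hello", lambda out: out == "HELLO"),
    ("abc", lambda out: out.isupper()),
]

print(run_eval(toy_system, fresh_cases))  # 1.0 on this toy system
```

The point isn't the scoring code, which is trivial, but the discipline: keep writing new cases like these as you learn how the system fails, and track the same score in production against real user traffic.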