You can also abstract the GUI away and only let given import sources sync the required data, e.g. the typical Excel list of hosts with some custom variables for further configuration (service sets, applies, notifications, etc.). In my experience this works very well in customer setups.
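To illustrate, a host synced from such an import source could end up as an Icinga 2 object along these lines. This is a minimal sketch; the hostname, template name, and custom variable names are made up for illustration:

```
object Host "app-server-01" {
  import "generic-host"

  address = "192.0.2.10"

  // custom variables synced from the import source (e.g. the Excel list)
  vars.os = "Linux"
  vars.service_set = "linux-base"
  vars.notification_team = "webops"
}
```

Apply rules and notification rules can then match on these custom variables, so the import source alone drives the rest of the configuration.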
Config Management: Puppet, etc.
On the other hand, if you're going the config management route and already have Puppet or Ansible in place, it totally makes sense to manage the monitoring configuration directly during the deployment cycle of a new host and leave out the Director.
If you need both a user interface and automated imports, the Director also provides an import source for PuppetDB.
Icinga 2 DSL
Within the Icinga 2 DSL itself, you can make heavy use of programming language techniques to achieve certain tasks. This can become complicated, especially when it comes to knowledge transfer or lacking documentation. If you're not a programmer, keep it simple, stupid (imho).
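As an example of such DSL techniques, an `apply for` rule can generate one service per entry of a host custom variable, with conditionals on top. The variable names and thresholds here are illustrative, not a recommendation:

```
// generate one disk service per entry in host.vars.disks
apply Service "disk " for (disk => config in host.vars.disks) {
  import "generic-service"

  check_command = "disk"
  vars += config

  // tighten thresholds for database hosts only
  if (host.vars.role == "db") {
    vars.disk_wfree = "10%"
  }

  assign where host.vars.os == "Linux"
}
```

Powerful, but exactly the kind of construct that needs documentation for the next person who has to maintain it.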
The Director requires you to find the best strategy for deployments: should you use service sets, what about the templates that are required all the time, why are data fields needed as command parameters, and so on. Once you master this, and have understood the agent and cluster parts, it is a breeze to work with. Next to the documentation, there are trainings and workshops available to improve here.
If you can automate the rollout of new hosts and subscribe to these events, or poll for them in a defined interval, use that power and automate your monitoring as well - with or without the Director as the interface to the Icinga 2 core.
If maintaining Puppet or the Director adds a burden, and you feel safe with static configuration, go for it. Deploying only host objects with a script, combined with well-crafted, fine-grained static apply rules defined just once, also works for common mid-sized environments. The larger the setup gets, and the more people who need to work with it, the more I recommend using the Director and config management.
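A sketch of such a static apply rule, defined once: as long as the deployment script only drops host objects with the right custom variables, new hosts pick up their services automatically. The variable names are again illustrative:

```
// defined once in the static configuration
apply Service "http" {
  import "generic-service"

  check_command = "http"

  // every scripted host with vars.app == "web" gets this service
  assign where host.vars.app == "web"
  ignore where host.vars.no_http == true
}
```

The script then never touches service definitions at all; it only writes host objects.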