

Leadership Principles: Guiding the Way to Success

Preface:

I've been searching for the right topic for my next blog post, one that would offer relevance and valuable insights. As my role has evolved in recent months, I've gained a deeper understanding of leadership and what it truly means to be a leader. While I've held various leadership positions in the past, it's only now, with the opportunity to reflect, that I've crystallized my beliefs about what constitutes strong leadership. Working at GitHub has exposed me to exceptional technical and people leaders, and collaborating with them has helped shape my leadership principles. This blog post aims to share these principles and provide my perspective on their significance.

Before we begin, it's essential to acknowledge that leadership takes various forms, and the principles I present may not be universally applicable. My objective is not to impose my views on others but to share what leadership means to me.

Introduction:

Leadership goes beyond a mere title or position; it embodies a mindset and a set of principles that guide individuals to unlock their potential and that of others. Whether leading a small team or an entire organization, understanding and embodying practical leadership principles can make all the difference in achieving professional and personal success.

In this blog post, I will explore the six leadership principles I hold myself accountable to. My aim with these principles is to cultivate a positive and productive work environment that lays a solid foundation for driving teams towards shared goals.

Lead by Example:

As a leader, actions speak louder than words. Leading by example entails embodying the behaviours and values you expect from others. Whether demonstrating integrity, embracing a strong work ethic, or fostering a culture of continuous learning, your actions set the standard for others to follow.

Communicate Effectively:

Clear, open, and transparent communication forms the backbone of effective leadership. It involves listening, providing feedback, and ensuring everyone is aligned on goals and expectations. Effective communication builds trust, resolves conflicts, and strengthens collaboration within the team. Whether the message is positive or negative, clarity in communication fosters a shared understanding.

Empower and Delegate:

A remarkable leader recognizes the strengths and potential of their team members and empowers them to take ownership and make decisions. Delegating tasks and responsibilities lightens the load and allows your team to grow and develop their skills. Trusting your team and granting autonomy instil confidence and a sense of ownership.

Inspire and Motivate:

A leader manages tasks, but, more importantly, inspires and motivates their team members. Be a source of inspiration by setting a compelling vision, articulating goals, and sharing the bigger picture. Recognize and celebrate achievements, creating a positive and supportive atmosphere where individuals feel motivated to give their best.

Humans, Not Robots:

A great leader acknowledges and appreciates each team member's unique qualities and needs. Treating people as individuals rather than mere cogs in a machine demonstrates empathy, fosters a positive work environment, and builds strong relationships.

Stay Relatable:

Staying relatable as a leader means remaining open to questions and maintaining credibility in your expertise. While it may not always be necessary, I have personally recognized and valued the importance of this approach, especially when working as an individual contributor within a team.

Conclusion:

I've learned that leadership is an ongoing journey that I have not mastered and may never (will likely never :) ) fully conquer. Each leader holds their own perspective on which behaviours are important to them. By sharing my leadership principles, I hope to spark insightful discussions and encourage fellow leaders to reflect on their guiding principles. Together, we can continue to grow and refine our leadership approaches, driving positive impact and empowering those around us.


Why Advanced Security?

Introduction:

It has been seven months since I first joined GitHub (wow, time goes quickly), specifically the Advanced Security team. Those who have worked with me before likely know I'm incredibly passionate about what I do and work on. I strive to work in an environment and within a team that contributes meaningful work that makes a difference. That is one of the reasons I jumped at joining GitHub when I had the chance; I am a massive believer in developer experience and DevSecOps, so why not join the home of developers, where I can hopefully contribute to making that difference at a broader level?

Over these past seven months, I have seen first-hand some of the decision-making criteria and processes that influence which security tool a company moves forward with. There are no right or wrong approaches to picking a security tool. However, there are vital considerations every company should keep in mind, especially if you are choosing a tool that will change a developer's experience and may impact their productivity.

As a member of the Advanced Security team, I wanted to share six thoughts on how Advanced Security can strategically bring value to companies, from a slightly different angle than you might expect.

Diversifying your toolset, centralizing the experience:

Companies commonly want to diversify their DevOps toolset. Doing so avoids vendor lock-in and lets you pick best-in-class tools. In the world of security, this is so important. Nowadays, you likely need coverage for SCA, SAST, IaC, containers and possibly DAST. Realistically, you are not going to find one tool that does all of these, and if you do, will it provide the depth and accuracy you would expect? So, let's say you use one tool for SCA, one tool for SAST and one tool for IaC. That means a developer has to check three different tools to get the data they need to make good security decisions. Yes, you may surface some basic results in CI and maybe back on the GitHub pull request. Still, they need to context switch between GitHub and the three security tools to see the details (to determine whether a result is a false positive, or to get further information about the vulnerability). For a developer, this is incredibly frustrating and hurts productivity. Developers live within their IDE and GitHub (these are the two places I commonly live when writing code), and that is where I get the most work done and am at my happiest. So, how do you provide the best experience for your developers whilst still using the tools you want to use?

GitHub Code Scanning (a feature of Advanced Security) provides the experience around fixing vulnerabilities: status checks in pull requests, in-line annotations on pull requests, a description of the vulnerability, data flow, and much more. The value of Code Scanning is that it's 100% language and tool agnostic. When you upload data, the only requirement is that it must be in SARIF (for those who don't know, SARIF is a structured JSON format). This means, for example, you could use CodeQL for SAST, Twistlock for container scanning and Snyk for SCA, run them all as part of your CI process and upload the results to Code Scanning. A developer would now see results from all tools and get a consistent experience! No more context switching between various tools to get the details. Everything a developer needs is now directly in GitHub, even though you aren't using GitHub tools to produce all the data.
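To show how small the ingestion side really is, here is a minimal sketch in Python of uploading a SARIF file from any scanner to the Code Scanning SARIF upload endpoint. The organization, repository, token, commit and file path are placeholders, and the sketch assumes a token with the security_events scope; in a GitHub Actions world, the github/codeql-action/upload-sarif action does this step for you.

```python
import base64
import gzip

import requests

# Placeholders: replace with your own values.
OWNER, REPO = "my-org", "my-repo"
TOKEN = "ghp_example"               # a token with the security_events scope
SARIF_FILE = "results.sarif"        # output of any SARIF-producing scanner
COMMIT_SHA = "0123456789abcdef0123456789abcdef01234567"
REF = "refs/heads/main"

# The SARIF upload endpoint expects the file gzip-compressed and base64-encoded.
with open(SARIF_FILE, "rb") as f:
    sarif_payload = base64.b64encode(gzip.compress(f.read())).decode("ascii")

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/code-scanning/sarifs",
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={"commit_sha": COMMIT_SHA, "ref": REF, "sarif": sarif_payload},
)
resp.raise_for_status()
print("Upload accepted:", resp.json())  # returns an id you can poll for processing status
```

In practice you would run a step like this at the end of each scanner's CI job, so every tool's results land in the same pull request experience.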

There are a few other benefits beyond improving the developer experience while keeping a diversified toolset. One is: let's say in eight months a new IaC security tool comes out that you want to start using; you just plug it into your CI process and upload the results to Code Scanning, as you would for any other tool. To a developer, 1) they wouldn't even really know a new tool has been added, they just see alerts that need reviewing, and 2) the alerts look the same as the other alerts they have been fixing for a while now. Because they are used to fixing previous alerts, they will be "used" to fixing alerts from this new IaC tool, which leads to remarkably high adoption rates. This is why it's so important to provide a consistent experience across tools.

Another benefit is that because the data is now all in GitHub, where the developers live, they will be more likely to look at and review the data coming from these tools rather than just skimming through it. There is less friction in seeing these results in Code Scanning, which means developers are more likely to take them seriously. With Code Scanning, you are truly making security a first-class citizen of the developer workflow. That is where you want security to get to: a first-class citizen of the developer workflow, where developers fix alerts almost unconsciously as they write code.

Regaining confidence in the developer community with SAST tooling:

I don't think it's unknown that developers traditionally don't like SAST tools. There are multiple reasons for this. Two main ones are that 1) developers are told about SAST results just before going to production, and 2) the results arrive either in a 23-page PDF or on a web page with 500+ alerts that THEY are told THEY need to review and fix. For the sake of this section, I will focus on the latter: the number of results.

Developers lose trust in SAST tools because they produce so many results, with a large proportion realistically being false positives or quality-focused rather than security-focused. That means out of, let's say, 500 alerts, only 20 may be worth addressing, so a developer has likely wasted two hours reviewing 480 unnecessary results. After about five iterations of this over a month, a developer will utterly lose confidence in the tool. Instead of properly looking through the results, they will skim through them and most likely miss something important that they wouldn't have missed if they had more confidence in the data.

This is where CodeQL comes into play. CodeQL is a semantic code analysis engine that treats your code as data. Vulnerabilities are then modelled as queries and executed against that data (built as a database during a CI run). Because CodeQL has this "built" version of your code represented as structured data, the queries run (when well written, of course) can be incredibly accurate and precise in the results they return. For more details on how CodeQL builds the database, check out this blog post by a colleague, Nick Rolfe: Code scanning and Ruby: turning source code into a queryable database. Although Ruby is in the name, the process is similar for other languages. All of the above means developers are less likely to see false positives. Therefore, when a result is found, developers will trust it more and review it appropriately, hopefully leading to meaningful action.

Combine the above with the three query suites you get out of the box with CodeQL per language. It is a legacy approach to run the same SAST scan on high-risk and low-risk applications. They have two completely different risk profiles, so why would a developer want to be notified about alerts they may only tolerate for higher-risk applications? With CodeQL, you can configure which query suite you want to run at the repository level. On a lower to medium-risk application, where your acceptance of false positives is close to 0%, you can run the standard security query suite. For medium to high-risk applications, where you have a slightly higher tolerance for possible false positives and want a broader set of results (like MD5 usage), you can run the security-extended query suite. The accuracy and precision of its queries may be slightly lower than the standard security suite, so expect to see a few more results. There is even a security-and-quality query suite that runs everything in security-extended plus some bonus quality queries! This is also advantageous to security personnel: most companies likely have an internal risk rating per application, so it would be easy to map an internal risk rating to a CodeQL query suite.
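To make that mapping concrete, here is a tiny, hypothetical sketch in Python. The risk ratings and the exact rating-to-suite mapping are invented; the output is simply the suite name you would then reference in each repository's CodeQL configuration ("default" meaning you don't override the queries setting at all).

```python
# Hypothetical mapping from an internal application risk rating to a CodeQL query suite.
SUITE_BY_RISK = {
    "low": "default",                 # precise results, lowest false-positive tolerance
    "medium": "security-extended",    # broader coverage, a few more results
    "high": "security-and-quality",   # everything, including quality queries
}

def query_suite_for(risk_rating: str) -> str:
    """Return the query suite to configure for a repository with the given risk rating."""
    return SUITE_BY_RISK.get(risk_rating.lower(), "default")

print(query_suite_for("high"))  # -> security-and-quality
```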

On top of all that, you can write your own CodeQL queries and even your own query packs! Think about the possibilities here. You can take our queries, add some of your own, and maybe create a specific query pack for JavaScript SPAs. Or maybe Python APIs? Dial the accuracy, precision and number of results to the level that suits your company.

Helping upskill and embrace educational collaboration around security:

It has already been established that security has shifted to being a developer-first process over the past five years. Security may be involved as a consultant or advisor, but developers will likely see the data about vulnerabilities before security ever does. This means security tooling needs to adapt to ensure the data returned is primarily aimed at the developer. That is a cultural shift for security tooling. Traditionally, these tools have been focused on the security persona, for good reasons, to be fair. Security has, up until now, operated with a security-first mindset, so data aimed at the security persona made sense. That is no longer a suitable approach in this modern age.

With CodeQL, every query comes with information about the query but, most importantly to a developer, it comes with a recommendation on how to fix the alert, along with language-specific code examples, good and bad! Code examples are what developers want to see. It's great that developers get information about the vulnerability, but if there are no recommendations on how to fix the alert and no language-specific code examples, you introduce friction into the developer's fixing process. A good security tool makes it easy to remediate. The easier it is for a developer to remediate the vulnerability, the more likely they are to do it quickly, there and then. That's what every developer and security persona wants: high remediation rates!

There will be cases (I have been there multiple times) where I read the description of a security alert, read the code examples, and still have no idea how to fix the vulnerability. At this point, traditionally, I would just give up and move on. Security can be seen as such a taboo topic in the developer world. Developers think it looks bad on them if they can't fix a vulnerability they caused, which leads to low remediation rates. This is a culture every security (and developer) company needs to try and change. Every tool needs to enable and foster collaboration, so that if developers are unsure how to fix something, they can ask and learn for the next time they see a similar vulnerability.

In Code Scanning, within each alert, a developer can click one button, which opens a GitHub Issue and automatically links that code scanning alert to the issue. In that issue, the developer can mention a tech lead or another developer, maybe a security advisor/engineer, and open a conversation about what they can do to fix this alert. You may be reading this thinking: how does one button really promote a discussion about learning? It's the fact that it's so simple and easy to do. A developer doesn't need to copy and paste a bunch of content into Slack/Teams/Jira. They simply click one button, and it automatically opens the issue with all the required information. It's easy to do. Going back to a previous point, streamlining and reducing friction in the developer process will encourage developers to use native features like these.

Quick rollout, and therefore quick time to value:

An essential part of adding any tool to your DevOps toolchain is the speed at which you can roll it out and quickly get high adoption rates. It isn't worthwhile purchasing a tool that takes months to adopt and get good uptake. You want a tool where the value begins on day one, not day 90.

To enable Secret Scanning, you simply click one button at the organization level. This enables Secret Scanning on every repository within that organization. Have 100 repositories? 10,000 repositories? It's a button click, and secret scanning is enabled. Custom patterns work the same way: you simply add your custom pattern to the organization, and it's applied to every repository automatically. We tend to find most companies adopt secret scanning within an hour or two of getting Advanced Security applied.
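The same enablement can also be scripted per repository if you prefer (for example, for a subset of repos). Below is a minimal Python sketch that walks an organization's repositories and enables Advanced Security and secret scanning through the repository-level security_and_analysis settings of the REST API; the organization and token are placeholders, and an admin-level token is assumed.

```python
import requests

ORG = "my-org"                      # placeholder organization
TOKEN = "ghp_example"               # placeholder token with repo admin rights
HEADERS = {"Authorization": f"token {TOKEN}", "Accept": "application/vnd.github+json"}
API = "https://api.github.com"

def org_repos(org):
    """Yield every repository in the organization, following pagination."""
    page = 1
    while True:
        resp = requests.get(f"{API}/orgs/{org}/repos",
                            headers=HEADERS, params={"per_page": 100, "page": page})
        resp.raise_for_status()
        repos = resp.json()
        if not repos:
            return
        yield from repos
        page += 1

for repo in org_repos(ORG):
    # Enable Advanced Security and secret scanning on each repository.
    resp = requests.patch(
        f"{API}/repos/{ORG}/{repo['name']}",
        headers=HEADERS,
        json={"security_and_analysis": {
            "advanced_security": {"status": "enabled"},
            "secret_scanning": {"status": "enabled"},
        }},
    )
    print(repo["name"], resp.status_code)
```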

Dependency Review is automatically enabled! There is nothing for you to do. No button click, no configuration, you just get advanced security turned on, and dependency review is automatically ready for use.

Code Scanning is the one product within Advanced Security that isn't automatically consumable via a button click or pre-enabled. This is because you have to update your CI pipelines/workflows to upload data into Code Scanning. Now, you may be thinking, "getting this ready for use across hundreds or thousands of repositories is going to take so much time". However, this doesn't have to be the case. Many customers have enabled Code Scanning (CodeQL) across thousands of repositories within days. If you use GitHub Actions, an open-source tool called the GHAS Enabler has been built that is fully dedicated to getting CodeQL enabled and set up across multiple repositories quickly and automatically. You can even use a GitHub Action after initially enabling CodeQL on current repositories to ensure any new repositories automatically get CodeQL set up. Don't use GitHub Actions? Not a problem. Use the APIs provided by GitHub to enable Code Scanning, then update your Jenkins/ADO pipelines programmatically with the required CodeQL commands (or any other tool you want to upload data from).
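The GHAS Enabler automates this for you, but the underlying idea is simple enough to sketch: commit a CodeQL workflow file to each repository via the contents API. The snippet below is a simplified, hypothetical illustration, not how the GHAS Enabler itself works; the organization, token, workflow contents and the direct write to the default branch are all assumptions you would adapt (in practice you would likely raise pull requests and tailor languages and query suites per repository).

```python
import base64

import requests

ORG = "my-org"                      # placeholder
TOKEN = "ghp_example"               # placeholder token with contents write access
HEADERS = {"Authorization": f"token {TOKEN}", "Accept": "application/vnd.github+json"}

# A deliberately minimal CodeQL workflow; tailor triggers, languages and queries as needed.
WORKFLOW = """\
name: CodeQL
on: [push, pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: github/codeql-action/init@v2
      - uses: github/codeql-action/analyze@v2
"""

def add_codeql_workflow(repo: str) -> None:
    """Create .github/workflows/codeql.yml on the default branch of the repository."""
    url = f"https://api.github.com/repos/{ORG}/{repo}/contents/.github/workflows/codeql.yml"
    resp = requests.put(url, headers=HEADERS, json={
        "message": "Enable CodeQL code scanning",
        "content": base64.b64encode(WORKFLOW.encode()).decode("ascii"),
    })
    resp.raise_for_status()

add_codeql_workflow("my-repo")
```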

Security Overview will automatically start showing data as more repositories activate GitHub Advanced Security. No configuration is needed!

Finally, rolling out a tool is more than just enabling it and getting teams using it. It's great that people may use Advanced Security, but are they really using it in the way you want? Are developers revoking secrets? Are developers remediating vulnerabilities found by CodeQL? GitHub has published a whitepaper on rolling out Advanced Security in a structured way that helps you see the value quickly and efficiently, hopefully ensuring people are using it as expected.

Let's find and revoke those secrets ...

When people think about application security, the two standard responses are SCA and SAST. Both of these tools are critical in every good DevSecOps process. However, a new response on the scene is starting to become just as important: Secret Scanning! I have seen some of the best DevSecOps processes complemented by CI/CD tooling, complete with automation and standardization. However, one capability is usually missing: a tool that detects secrets. Let's walk through why this is so important. Let's say a developer accidentally pushes a private key and an Azure Cosmos DB credential to a repository, and no one is aware. Another developer on that project (maybe a contractor who is new to the team?) finds these credentials and stores them for later use. Maybe that contractor is then let go quickly? That contractor then has ALL the permissions they need to access data in that database and delete EVERYTHING that Azure Cosmos DB credential has access to. Scary, right? This is just one example, but it highlights the importance of a tool that finds secrets. You could have the best SAST, DAST, SCA, etc., but secrets can simply bypass all of these.

GitHub Secret Scanning can detect not just new secrets, but secrets leaked throughout the entire git history of a repository. GitHub only adopts high-confidence patterns in its secret scanning service to ensure low false-positive rates, leading to higher confidence from developers. This means that when secrets are found, developers actually action them. We don't want secret scanning to become a problem similar to SAST, where too many false positives lead to low confidence, which means no one uses it and secrets aren't revoked. There is no point in having a tool that finds secrets if no one does anything about them. Remediation rates matter more than the number of alerts found! However, if we don't have a pattern for a secret you would like to find, that's not a problem: you can simply create a custom pattern for that secret type, and the secret scanning engine will find any values that match it.
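As a purely hypothetical example, suppose your internal platform issues tokens of the form "acme_" followed by 32 hex characters. A custom pattern is essentially a regular expression, so you could sanity-check it locally before registering it (the exact syntax you enter in the custom pattern form may differ slightly).

```python
import re

# Hypothetical internal token format: "acme_" followed by 32 lowercase hex characters.
CUSTOM_PATTERN = r"acme_[0-9a-f]{32}"

samples = [
    "acme_9f2b4c6d8e0a1b3c5d7e9f0a1b2c3d4e",   # should match
    "acme_not-a-real-token",                    # should not match
]
for s in samples:
    print(s, "->", bool(re.fullmatch(CUSTOM_PATTERN, s)))
```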

Just finding the secrets may not be enough. It's great that a tool finds the secrets, but is that adequate? It still requires people's time to revoke and remediate these secrets, which may take serious time and effort. That's why, within GitHub Secret Scanning, whenever a secret is found, a webhook can be fired and ingested by you. This opens up endless possibilities around automatically revoking certain secrets and even custom update scripts. These webhooks allow you to react to secrets being detected within seconds! No more manual work.
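To make that concrete, here is a rough sketch of a webhook receiver (Python with Flask) that listens for the secret_scanning_alert event and hands off to a hypothetical revoke_credential helper. The webhook secret, endpoint path and revocation logic are all assumptions you would replace with your own.

```python
import hashlib
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
WEBHOOK_SECRET = os.environ["WEBHOOK_SECRET"]  # the secret configured on the GitHub webhook


def signature_is_valid(payload: bytes, signature_header: str) -> bool:
    """Verify the X-Hub-Signature-256 header GitHub sends with each delivery."""
    expected = "sha256=" + hmac.new(WEBHOOK_SECRET.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header or "")


def revoke_credential(secret_type: str, alert: dict) -> None:
    """Hypothetical helper: call the issuing provider's API to revoke the leaked credential."""
    print(f"Revoking {secret_type} from alert {alert.get('html_url')}")


@app.route("/github/webhook", methods=["POST"])
def handle_webhook():
    if not signature_is_valid(request.get_data(), request.headers.get("X-Hub-Signature-256")):
        abort(401)
    if request.headers.get("X-GitHub-Event") == "secret_scanning_alert":
        payload = request.get_json()
        if payload.get("action") == "created":          # a new secret was just detected
            alert = payload.get("alert", {})
            revoke_credential(alert.get("secret_type", "unknown"), alert)
    return "", 204


if __name__ == "__main__":
    app.run(port=8080)
```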

Including security in the developer workflow and embracing data-driven conversations:

The final aspect to discuss is likely one of the most important. When security personnel hear about "shifting left" and "giving more responsibilities to the developer", the pushback is always, "How can I verify what the developers are doing is correct?" and "What will my role be in this new process?". One of the key elements of driving a developer-first security mindset is bringing developers and security closer together, working in tandem, rather than being two separate entities involved at different stages of the software lifecycle. Being developer-first absolutely doesn't mean security sitting aside and watching along. It's about giving data to developers first, in a meaningful and purposeful way, so they can take action quickly and efficiently, and then providing data to security personnel so they can have more data-driven conversations with developers, ensuring what they are doing is in the company's best interests.

Security Overview is the beginning of that data-driven journey between security and developers. Security Overview allows you to answer questions such as:

  • Show me the top ten repositories which leak the most secrets
  • Show me the top ten repositories with the most code scanning alerts, focusing on the most critical repositories by risk
  • Show me the total number of Azure secrets which have been leaked and the repositories they have been leaked in
  • Show me the total number of JavaScript SQL Injections and the repositories they have been found in

The value of the above is that you can now create targeted educational campaigns for the repositories that need the most guidance. You can even create communication plans aimed only at the specific repositories that require contact, giving a much more personalized feel.

You may even have your own SIEM tool, like Splunk, Datadog or Sentinel, which means Security Overview may be useful for specific use cases, but there may be more data points that these SIEM tools can provide than Security Overview can right now. This is not a problem. Use the APIs and webhooks (or even native third-party integrations) to integrate with the tool of your choice! Security Overview continues to mature, but it has been so exciting to see the cultural and collaboration changes that it has promoted.
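If you want to reproduce a question like "show me the top ten repositories which leak the most secrets" outside the UI, or feed it into your SIEM, a rough sketch against the organization-level secret scanning alerts endpoint of the REST API might look like the following. The organization and token are placeholders, and error handling is kept minimal.

```python
from collections import Counter

import requests

ORG = "my-org"                      # placeholder
TOKEN = "ghp_example"               # placeholder token with the security_events scope
HEADERS = {"Authorization": f"token {TOKEN}", "Accept": "application/vnd.github+json"}

# Count open secret scanning alerts per repository across the organization.
counts = Counter()
page = 1
while True:
    resp = requests.get(
        f"https://api.github.com/orgs/{ORG}/secret-scanning/alerts",
        headers=HEADERS,
        params={"state": "open", "per_page": 100, "page": page},
    )
    resp.raise_for_status()
    alerts = resp.json()
    if not alerts:
        break
    for alert in alerts:
        counts[alert["repository"]["full_name"]] += 1
    page += 1

# "Show me the top ten repositories which leak the most secrets"
for repo, total in counts.most_common(10):
    print(f"{repo}: {total} open secret alerts")
```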

Conclusion:

To conclude, the above does not take away from what you would expect from a good security tool, e.g. good, high-quality results. But there is more to a security tool than the number of results found. Security has shifted from a security-first industry to a security- and developer-first industry. Therefore, we need to provide tools and processes that complement both personas, not just one. Developers are the people you expect to fix these results, so let's make sure the experiences provided to them are aimed at them, whilst ensuring security has the data to verify the developers are doing what's in the company's best interest.

Reflecting on one of my first phrases in this article, "There are no right or wrong approaches to picking a security tool", I stick by that phrase. Every company has its own beliefs and criteria on what's most important to them. Still, the next time you think about changing, adding or updating developer tooling, especially security, consider more than just the results and data. Think about the experiences you want to create and the outcomes you want to foster.


The Importance of Developer Experience - Developer roles are evolving!

Introduction:

Over the past five years, I have had the pleasure of working within different developer communities across various organizations. These experiences are where I developed my passion for developer experience and Dev(Sec)Ops. Making life as easy as possible for developers and ensuring the right tools are in place is essential to improving developer retention and business outcomes. Nowadays, this is even more imperative because the role of a developer is no longer just software development; it's becoming key within every area of the SDLC.

The evolution of a developer ... :

What do I mean by this? Let's quickly discuss the evolution.

Traditionally, a business analyst would hand a requirement over to a developer. That developer would then carry out the software development for that requirement. The developer would then hand it over to a tester to write unit tests, regression tests, etc. You would repeat that process until all requirements were completed. Then, a security engineer would run a security test and attempt to understand which results are false positives and which need to be fixed. Once all vulnerabilities are resolved, quality would come in and work with the business/technical analyst to ensure all documentation is complete. Then, you can release!

What's the theme you see in this way of working? It's very waterfall, very messy, and there are so many points at which the release can be slowed.

Now, a more modern approach could be:

A scrum master works with a developer to draft user stories that meet the requirement. The developer carries out the actual software development. Once complete, they write unit tests (maybe even UI tests if it's a GUI, and some regression tests). Alongside testing, the developer gets SCA and SAST reports of any vulnerabilities, fixing them during development if any arise. Additionally, the developer updates design documentation, READMEs, CHANGELOGs, etc. During this process, the security engineer, quality consultant and business/systems analyst are ensuring it meets requirements (DevSecOps), meaning that just before release there are no problems, and you can release straight away!

So, what's the main difference between the two? Straight away, you see the developer (or developers) carrying out far more of the tasks within the SDLC. This doesn't mean the other roles aren't involved or important; it just means there is a shift in who is primarily doing the work. You may still have testers for large applications, but the developer building the feature or fixing the bug writes the initial tests. Similarly, you may (and should) have a dedicated security engineer on hand to assist with vulnerabilities the developer needs advice about. Still, the developer is going in and remediating the vulnerabilities. These two examples highlight the shift-left nature (nothing new) of how development is done.

Why? Why is this evolving in this way? Let's discuss:

  • The most important reason is that the developer knows the codebase best. I'll give two examples of why this is critical:
    • Let's say you have a centralized testing team that provides unit/regression/UI testing capabilities. The developer does the work, then hands off to a tester to write the test(s). There is a one-to-two-day SLA for that testing to be picked up. Then the tester has to get up to speed with the feature/bug that has been written/fixed, which largely depends on the size of the change but can take anywhere from an hour to a day. The actual development of the tests likely takes the same amount of time either way, so no added time there. The work then gets PR'd, which needs another review by the developer who wrote the code to ensure it meets the requirement of the bug/feature, again adding to the total turnaround. In comparison, if the developer who wrote the code writes the tests as well, there is zero SLA in the handover to the testers, zero time spent getting up to speed with the codebase, and zero extra review, as the tests can go into the main PR into the dev/qa/main branch. Think about this at scale; there is SO much time saved, which equates to quicker value for the business.
    • Another example: security. Traditionally, a security person would look through a report and say, "Hey, you need to fix all critical and high vulnerabilities". The developer would try to pivot the conversation to fixing the vulnerabilities that posed the highest risk to the most critical parts of the codebase. However, the conversation would typically end with all critical and high vulnerabilities needing remediating, if not all of them. Although controversial, this is highly inefficient and generally wastes a considerable amount of time. Why? Just because a vulnerability is flagged as critical doesn't mean it's critical to your application. On the other hand, you may have a medium-severity vulnerability that directly affects the most important aspect of your application, with the largest attack surface. This is why the developer who wrote the code should take accountability for reviewing and remediating the vulnerabilities that are most critical to the application, rather than prioritizing purely on the reported severity. This leads to quicker release cycles and more secure software, as developers fix the vulnerabilities that count early!
  • The second reason developers are becoming more involved is that more and more aspects of the SDLC are... well... becoming code. Think about traditional development: you would build software and hand it off to an infrastructure person to deploy it. Nowadays, the developer is that infrastructure person. The developer writes the code for the feature, then writes the code for the IaC that supports that feature. You are also seeing CI/CD becoming more config-as-code. As in the feature example above, the developer writes the feature, then the IaC, and any additions to the CI/CD landscape. More and more aspects of development are becoming code, which requires a developer.

So, the above covers the evolution of a developer, which paints a picture of why developer experience is so important. If the developer is doing more, you naturally want to maximize velocity to get the best value. However, most importantly, you want to maximize how happy they are. A happy developer = better retention + productivity. I truly believe the more emphasis you put on developer experience, the more you will get back. So, what can you do to improve the developer experience? Below, I will discuss my three core principles when it comes to developer experience.

Developer First Toolset

Stand up tooling that has developer-first principles. When you put more on developers and follow a DevSecOps approach, it's critical to stand up tooling within the toolchain that focuses on developers.

More and more tools are starting to put developers at the heart of what they do, which paves the way for increased productivity and experience (we have discussed why this is important above). Some great examples of tooling which should be developer-first are:

  • Security
  • CI
  • CD
  • Quality
  • Testing

You wouldn't hand a scientist equipment that makes their work harder. You wouldn't give a medic devices that are difficult to use and make their life hard. So why would you hand developers tooling that doesn't make their life easy?

If you're a company looking to change your tooling to be more developer-focused, ensure you have developers involved in the decision-making.

Frictionless Processes

Focus on the process, not just the tools. Having the right tools is essential, but if you make them hard to use or implement them in a way that introduces friction, you won't get the most out of them. What can you do to ensure you have good foundations?

  • Automate: Try not to put any manual requests or SLAs on setup. Use the APIs, webhooks, etc., provided by tools to get set up in an automated fashion.
  • Provide suitable levels of access: Try not to limit access to tools, especially "just in case". Provide people with the autonomy to read, write and administer their solutions. There are times when you have to limit access, especially in large enterprises, but do so for the right reasons, not just for the sake of not knowing.
  • Open APIs: The most frustrating thing for a developer is being restricted in the art of the possible because APIs are disabled, usually because the team managing the tooling worries about what other teams might do with them. As above, there may be good reasons (security, etc.), but unless something genuinely blocks you, open up APIs and empower teams to be creative.

These are just a few examples. The main point is automation. Automation is a great way to remove friction, and it is especially important across the DevOps toolchain. Ensure that it's easy to get data between tool A and tool B. Automation and interconnectivity are essential to success, whether that's a testing tool talking to a project management tool or a security tool talking to a CI tool. Think about your process, not just the tools.

Empowerment & Trust

The last one focuses on empowerment and trust. This behaviour generally gets overlooked, as it's easier to focus on the tools, where you can measure and quantify success more easily. However, there are simple steps you can take to help drive a more open culture among developers:

  • Don't push back on developers just for the sake of sharing an opinion. I have seen many scenarios where a developer shares a great thought but gets questioned or disputed by someone who doesn't know the area and simply wants to be part of the conversation. Everyone should share ideas, collaborate and be open, but ensure developers' voices are heard and listened to. This is a simple behaviour to adopt, but it will make a huge difference.
  • I mentioned this above in the process section, but nowadays (especially in larger enterprises) access is restricted and APIs are disabled. (I know what you're thinking: what about security? You should never compromise on security, but think about what automated processes, e.g. key rotation and automatic access reviews, allow you to keep a high standard of security while still enabling access and APIs.) The more you give to a developer, the more they will feel empowered and trusted, which will boost morale.

Focus on your people, and have trust in your developers.