How to integrate Google Cloud Build with other GCP services?
Wednesday, 19 March 2025
Google Cloud Build (Cloud Build) is a powerful service on Google Cloud Platform (GCP) that automates your CI/CD pipelines. Its strength lies not only in its core functionality of building and testing code, but also in its seamless integration with a wide array of other GCP services. This integration allows for streamlined workflows, enhanced security, and efficient resource management across your entire development lifecycle.
Understanding the Benefits of Integration
Integrating Cloud Build with other GCP services unlocks a multitude of benefits:
- Automated Deployments: Directly deploy applications to environments like Google Kubernetes Engine (GKE), Cloud Functions, and App Engine without manual intervention.
- Enhanced Security: Leverage Identity and Access Management (IAM) to grant fine-grained permissions to your build processes, ensuring secure access to resources.
- Efficient Artifact Management: Store build artifacts like container images and Java archives in secure and reliable repositories like Artifact Registry or Container Registry.
- Streamlined Data Processing: Trigger data processing pipelines in services like Cloud Dataflow based on build events.
- Centralized Logging and Monitoring: Aggregate logs and metrics from your builds and integrated services in Cloud Logging and Cloud Monitoring for comprehensive observability.
- Infrastructure as Code (IaC) Automation: Use Cloud Build to provision and manage infrastructure using tools like Terraform, promoting a declarative and repeatable approach.
Key GCP Services for Cloud Build Integration
Here's a look at some of the most common and valuable GCP services you can integrate with Cloud Build, along with practical examples:
1. Google Kubernetes Engine (GKE)
Integrating Cloud Build with GKE enables automated deployments to your Kubernetes clusters. This is a cornerstone of many modern CI/CD pipelines.
Use Case: Deploying a containerized application to a GKE cluster upon a successful code commit.
How it works:
- Code is committed to a repository (e.g., Cloud Source Repositories, GitHub, Bitbucket).
- Cloud Build is triggered by a webhook.
- Cloud Build retrieves the code.
- Cloud Build builds a Docker image using docker build.
- Cloud Build pushes the image to Artifact Registry.
- Cloud Build uses kubectl apply (or a similar tool) to update the Kubernetes deployment configuration with the new image tag.
Example cloudbuild.yaml:
steps:
# Build the Docker image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/${PROJECT_ID}/my-repository/my-image:${TAG_NAME}', '.']
# Push the Docker image to Artifact Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'us-central1-docker.pkg.dev/${PROJECT_ID}/my-repository/my-image:${TAG_NAME}']
# Deploy to GKE using kubectl
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/my-deployment', 'my-container=us-central1-docker.pkg.dev/${PROJECT_ID}/my-repository/my-image:${TAG_NAME}', '-n', 'my-namespace']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
images:
- 'us-central1-docker.pkg.dev/${PROJECT_ID}/my-repository/my-image:${TAG_NAME}'
options:
  substitutionOption: 'ALLOW_LOOSE'
Explanation:
- The first step builds the Docker image and tags it with the full Artifact Registry path. Make sure you have Artifact Registry set up, including the repository.
- The second step pushes the built image to Artifact Registry.
- The third step deploys the application to the GKE cluster using kubectl set image. This step requires kubectl to be authenticated against your cluster, which means granting the Cloud Build service account appropriate roles on the cluster (e.g., roles/container.developer).
- ${TAG_NAME} and ${PROJECT_ID} are Cloud Build substitution variables that are provided automatically for triggered builds, or you can define your own. Setting substitutionOption: 'ALLOW_LOOSE' allows builds to proceed even if a substitution variable is not defined (helpful when submitting builds manually, where ${TAG_NAME} is not populated).
- The images section lists the images that Cloud Build pushes and records in the build results on success.
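As an alternative to raw kubectl, Google also provides the gke-deploy builder, which wraps kubectl and waits for the deployed resources to become ready. A minimal sketch, assuming your Kubernetes manifests live in a k8s/ directory (the directory name, cluster name, and zone here are placeholders for illustration):

steps:
# Apply manifests and substitute the freshly built image into them
- name: 'gcr.io/cloud-builders/gke-deploy'
  args:
  - 'run'
  - '--filename=k8s/'  # directory containing your Kubernetes manifests (assumed)
  - '--image=us-central1-docker.pkg.dev/${PROJECT_ID}/my-repository/my-image:${TAG_NAME}'
  - '--cluster=my-cluster'
  - '--location=us-central1-a'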
2. Cloud Functions
Cloud Build can be used to automatically deploy serverless functions written in languages like Python, Node.js, or Go.
Use Case: Automatically deploy a Cloud Function each time code is committed to a function's source directory.
How it works:
- Code is committed to the Cloud Function's source directory.
- Cloud Build is triggered.
- Cloud Build packages the function's code.
- Cloud Build deploys the function using the gcloud functions deploy command.
Example cloudbuild.yaml:
steps:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:latest'
  args:
  - 'gcloud'
  - 'functions'
  - 'deploy'
  - 'my-function'
  - '--runtime'
  - 'python39' # or any supported runtime
  - '--trigger-http' # or any appropriate trigger
  - '--region'
  - 'us-central1'
  - '--source=.' # current working directory
Explanation:
- This example uses the Cloud SDK builder image to deploy a Cloud Function named my-function. Ensure the current directory (--source=.) contains your function's source code.
- The gcloud functions deploy command includes the arguments needed to specify the function's runtime, trigger type, and region.
- Again, the Cloud Build service account needs appropriate IAM roles to deploy Cloud Functions (e.g., roles/cloudfunctions.developer).
3. Artifact Registry & Container Registry
Cloud Build uses container registries like Artifact Registry or the older Container Registry to store Docker images created during builds.
Use Case: Storing container images and build artifacts for later deployment and version control.
How it works:
- Cloud Build builds a Docker image.
- Cloud Build tags the image with a specific tag (e.g., a version number or commit hash).
- Cloud Build pushes the image to Artifact Registry/Container Registry.
Artifact Registry (Recommended): Offers more granular permissions, regional storage options, and improved organization. The earlier GKE example already demonstrates pushing to Artifact Registry. Make sure the Artifact Registry API is enabled and the Cloud Build service account has the necessary permissions (e.g., roles/artifactregistry.writer).
Container Registry (Legacy): Container Registry is deprecated; migrate existing images and pipelines to Artifact Registry.
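For version control of images, a common pattern is to tag each image with the commit SHA in addition to a floating tag. A minimal sketch, assuming a trigger-invoked build (the ${SHORT_SHA} substitution is only populated for triggered builds) and the my-repository repository from the GKE example:

steps:
# Tag the same image with both the short commit SHA and a floating 'latest' tag
- name: 'gcr.io/cloud-builders/docker'
  args:
  - 'build'
  - '-t'
  - 'us-central1-docker.pkg.dev/${PROJECT_ID}/my-repository/my-image:${SHORT_SHA}'
  - '-t'
  - 'us-central1-docker.pkg.dev/${PROJECT_ID}/my-repository/my-image:latest'
  - '.'
images:
- 'us-central1-docker.pkg.dev/${PROJECT_ID}/my-repository/my-image:${SHORT_SHA}'
- 'us-central1-docker.pkg.dev/${PROJECT_ID}/my-repository/my-image:latest'

The SHA-tagged image gives you an immutable reference for rollbacks, while the latest tag stays convenient for development.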
4. Cloud Storage
Cloud Build can store arbitrary files and artifacts in Cloud Storage buckets. This is useful for storing build logs, configuration files, and other assets generated during the build process.
Use Case: Archiving build logs and test results in a Cloud Storage bucket.
How it works:
- Cloud Build performs a build.
- Cloud Build uses the gsutil cp command to copy the desired files to a Cloud Storage bucket.
Example cloudbuild.yaml:
steps:
# Your build steps here...
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'build.log', 'gs://my-bucket/builds/${BUILD_ID}/build.log']
Explanation:
- This example uses the gsutil builder image to copy the build.log file to a Cloud Storage bucket named my-bucket, organizing the logs by build ID (${BUILD_ID} is a built-in Cloud Build substitution variable).
- Ensure your Cloud Build service account has the roles/storage.objectCreator role on the target bucket.
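As an alternative to an explicit gsutil step, cloudbuild.yaml also supports a top-level artifacts field that uploads files to Cloud Storage after all steps succeed. A minimal sketch, assuming the build produces build.log and test-results.xml (hypothetical file names) and that my-bucket exists:

# ... your build steps above ...
artifacts:
  objects:
    location: 'gs://my-bucket/builds/${BUILD_ID}'
    paths: ['build.log', 'test-results.xml']

The artifacts approach has the advantage that uploads only happen when the build succeeds, so failed builds don't pollute your bucket.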
5. Cloud SQL
While direct Cloud SQL integration isn't as straightforward as others, Cloud Build can interact with Cloud SQL instances to perform database migrations, seed data, or run other administrative tasks.
Use Case: Automating database schema updates during deployments.
How it works:
- Cloud Build is triggered by a code change.
- Cloud Build retrieves your database migration scripts (e.g., SQL files).
- Cloud Build uses a builder (e.g., a custom Docker image, or a step running the gcloud command) to execute the migration scripts against your Cloud SQL instance. This typically involves authentication.
Example (using gcloud to create a database and import a schema):
steps:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:latest'
  args:
  - 'gcloud'
  - 'sql'
  - 'databases'
  - 'create'
  - 'mydb'
  - '--instance=my-cloudsql-instance'
  - '--project=${PROJECT_ID}'
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:latest'
  args:
  - 'gcloud'
  - 'sql'
  - 'import'
  - 'sql'
  - 'my-cloudsql-instance'
  - 'gs://my-bucket/schema.sql'
  - '--database=mydb'
  - '--project=${PROJECT_ID}'
Explanation:
- This snippet demonstrates creating a database and then importing a schema from Cloud Storage (note that the import command is gcloud sql import sql, which takes the instance and file URI as arguments and the database as a flag).
- Replace my-cloudsql-instance with the actual name of your Cloud SQL instance.
- The Cloud Build service account needs permission to run these operations (e.g., roles/cloudsql.admin for creating databases and running imports), and the Cloud SQL instance's service account needs read access to the schema file (e.g., roles/storage.objectViewer on the bucket). For steps that connect to the database directly, roles/cloudsql.client is required.
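For the custom-image approach mentioned under "How it works", a build step can run a container that bundles your migration tool together with the Cloud SQL Auth Proxy. The image name and arguments below are hypothetical placeholders, not a published image:

steps:
# Hypothetical custom image bundling a migration tool and the Cloud SQL Auth Proxy
- name: 'us-central1-docker.pkg.dev/${PROJECT_ID}/my-repository/db-migrator:latest'
  args:
  - '--instance=${PROJECT_ID}:us-central1:my-cloudsql-instance'
  - '--migrations-dir=./migrations'

With this pattern, the Cloud Build service account typically needs roles/cloudsql.client so the proxy can connect to the instance.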
6. Identity and Access Management (IAM)
IAM is critical for securing your Cloud Build pipelines and granting appropriate permissions to integrated services.
Use Case: Restricting access to resources and operations based on the principle of least privilege.
How it works:
- The Cloud Build service account ([PROJECT_NUMBER]@cloudbuild.gserviceaccount.com by default) is the identity used by your builds.
- Grant the Cloud Build service account the specific IAM roles needed to access and interact with other GCP services. Examples have been given throughout this document.
- Avoid granting overly permissive roles like roles/owner. Instead, assign granular roles like roles/container.developer (for GKE), roles/cloudfunctions.developer (for Cloud Functions), and roles/storage.objectCreator (for Cloud Storage).
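Beyond granting roles to the default service account, Cloud Build also lets a build run as a dedicated, least-privilege service account via the top-level serviceAccount field. A minimal sketch, assuming a deployer service account you have already created (the project and account names are placeholders):

serviceAccount: 'projects/my-project/serviceAccounts/deployer@my-project.iam.gserviceaccount.com'
options:
  logging: CLOUD_LOGGING_ONLY  # builds with a custom service account must route logs explicitly (unless a logsBucket is set)
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['functions', 'deploy', 'my-function', '--region=us-central1', '--source=.']

This way, each pipeline can carry only the permissions it actually needs instead of accumulating roles on one shared account.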
7. Cloud Logging & Cloud Monitoring
Cloud Logging and Cloud Monitoring provide invaluable tools for observing your Cloud Build processes and the behavior of integrated services.
Use Case: Monitoring build performance, detecting errors, and analyzing the impact of deployments.
How it works:
- Cloud Build automatically sends logs to Cloud Logging. You can filter and analyze these logs to troubleshoot issues and track build progress.
- You can define custom metrics in Cloud Monitoring based on build events or the performance of deployed applications.
- Alerts can be configured to notify you of critical events, such as build failures or performance regressions.
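Build logs flow to Cloud Logging automatically; if you also want them archived in a bucket you control, cloudbuild.yaml accepts a top-level logsBucket field. A minimal sketch, assuming my-bucket exists and the build service account can write to it:

logsBucket: 'gs://my-bucket/build-logs'
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/${PROJECT_ID}/my-repository/my-image:${TAG_NAME}', '.']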
Best Practices for Integrating Cloud Build
To maximize the effectiveness of your Cloud Build integrations, follow these best practices:
- Use Infrastructure as Code (IaC): Manage your infrastructure and deployments using tools like Terraform. Cloud Build can then automate the application of Terraform configurations (see the sketch after this list).
- Parameterize Your Builds: Utilize Cloud Build's substitution variables to make your build configurations more flexible and reusable.
- Test Thoroughly: Include automated tests in your build pipelines to ensure code quality and prevent regressions.
- Monitor Your Builds: Regularly monitor your builds and deployed applications to identify and address issues promptly.
- Secure Your Pipelines: Use IAM to restrict access to resources and secrets, and regularly audit your Cloud Build configurations.
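For the IaC practice above, Cloud Build can run Terraform directly using the public hashicorp/terraform image from Docker Hub, whose entrypoint is the terraform binary. A minimal sketch, assuming your Terraform configuration lives at the repository root and remote state is configured via a backend block (the image tag is illustrative):

steps:
# Initialize providers and the state backend
- name: 'hashicorp/terraform:1.7'
  args: ['init']
# Apply the configuration non-interactively
- name: 'hashicorp/terraform:1.7'
  args: ['apply', '-auto-approve']

In practice, you would typically run terraform plan on pull requests and gate the apply step behind a trigger on your main branch.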
Conclusion
Integrating Google Cloud Build with other GCP services unlocks significant benefits for your CI/CD workflows. Automating deployments, enhancing security, and streamlining resource management lets you accelerate development cycles and deliver high-quality applications more efficiently. Combining services like GKE, Cloud Functions, Artifact Registry, and Cloud Storage with the right IAM roles and a focus on observability yields robust, reliable cloud-native solutions. Remember to prioritize security and adopt Infrastructure as Code principles for a manageable and scalable system.