Cloud runner v0.2 - continued quality of life improvements (#387)

* Update cloud-runner-aws-pipeline.yml

* Update cloud-runner-k8s-pipeline.yml

* yarn build

* yarn build

* correct branch ref

* correct branch ref passed to target repo

* Create k8s-tests.yml

* Delete k8s-tests.yml

* correct branch ref passed to target repo

* correct branch ref passed to target repo

* Always describe AWS tasks for now, due to unstable error handling

* Remove unused tree commands

* Use lfs guid sum

* Simple override cache push

* Simple cache push and pull overrides to allow pure cloud-storage-driven caching

* Removal of early branch (breaks lfs caching)

* Remove unused tree commands

* Update action.yml

* Update action.yml

* Support cache and input override commands as input + full support for custom hooks

* Increase k8s timeout

* replace filename being appended for unclear reason

* cache key should not contain whitespaces

* Always try and deploy rook for k8s

* Apply k8s files for rook

* Update action.yml

* Apply k8s files for rook

* Apply k8s files for rook

* cache test and action description for kubernetes storage class

* Correct test and implement dependency health check and start

* GCP-secret run, cache key

* lfs smudge set explicit and undo explicit

* Run using external secret provider to speed up input

* Update cloud-runner-aws-pipeline.yml

* Add nodejs as build step dependency

* Add nodejs as build step dependency

* Cloud Runner Tests must be specified to capture logs from cloud runner for tests

* Cloud Runner Tests must be specified to capture logs from cloud runner for tests

* Refactor and cleanup - no async input, combined setup/build, removed github logs for cli runs

* better defaults for new inputs

* better defaults

* merge latest

* force build update

* use npm n to update node in unity builder

* correct new line

* quiet zipping

* quiet zipping

* default secrets for unity username and password

* default secrets for unity username and password

* ls active directory before lfs install

* Get cloud runner secrets from

* Get cloud runner secrets from

* Cleanup setup of default secrets

* Various fixes

* AWS secrets manager support

* less caching logs

* default k8s storage class to pd-standard

* more readable build commands

* Capture aws exit code 1 reliably

* Always replace /head from branch

* k8s default storage class to standard-rwo

* cleanup

* further cleanup input

* folder sizes to inspect caching

* dir command for local cloud runner test

* k8s wait for pending because pvc will not create earlier

* prefer k8s standard storage

* handle empty string as cloud runner cluster input

* local-system is now used for cloud runner test implementation AND correctly unset test CLI input

* local-system is now used for cloud runner test implementation AND correctly unset test CLI input

* fix unterminated quote

* fix unterminated quote

* do not share build parameters in tests - in cloud runner this will cause conflicts with resources of the same name

* remove head and heads from branch prefix

* fix reversed caching direction of cache-push

* fixes

* cachePull cli

* fixes

* order cache test to be first

* order cache test to be first

* fixes

* populate cache key instead of using branch

* cleanup cli

* garbage-collect-aws cli can iterate over aws resources and cli scans all ts files

* import cli methods

* import cli files explicitly

* log parameters in cloud runner parameter test

* Cloud runner param test before caching because we have a fast local cache test now

* Using custom build path relative to repo root rather than project root

* aws-garbage-collect at end of pipeline

* aws-garbage-collect do not actually delete anything for now - just list

* remove some legacy du commands

* Update cloud-runner-aws-pipeline.yml

* log contents after cache pull and fix some scenarios with duplicate secrets

* PR comments

* Replace guid with uuid package

* use fileExists lambda instead of stat to check file exists in caching

* build failed results in core error message

* Delete sample.txt

* cloud-runner-system prefix changed to cloud-runner

* Update cloud-runner-aws-pipeline.yml

* remove du from caching; it should be run manually when interested in size, as it adds too much runtime to include in every job by default

* github ephemeral pipeline support

* github ephemeral pipeline support

* Merge remote-tracking branch 'origin/main' into cloud-runner-develop

# Conflicts:
#	dist/index.js.map
#	src/model/cloud-runner/providers/aws/aws-task-runner.ts
#	src/model/cloud-runner/providers/aws/index.ts

* garbage collection

* garbage collection

* self hosted runner pipeline

* ephemeral runner pipeline

* download runner each time

* garbage collect all older than 1d as part of cleanup

* download runner each time

* number container cpu and memory for aws

* per provider container defaults

* Skip printing size unless cloudRunnerIntegrationTests is true

* transition zip usage in cache to uncompressed tar for speed

* baked in cloud formation template

* better aws commands

* better aws commands

* parse number for cloud formation template

* remove container resource defaults from actions yaml

* remove container resource defaults from actions yaml

* skip all input readers when cloud runner is local

* prefer fs/promises

* actually set aws cloud runner step as failure if unity build fails

* default to 3gb of ram - webgl fails on 2
Frostebite 2022-04-22 00:47:45 +01:00 committed by GitHub
parent 5ae03dfef6
commit 8abce48a48
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
26 changed files with 240 additions and 789 deletions

@ -12,3 +12,26 @@ jobs:
with:
token: ${{ secrets.GITHUB_TOKEN }}
expire-in: 21 days
cleanupCloudRunner:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
if: github.event.event_type != 'pull_request_target'
with:
lfs: true
- uses: actions/setup-node@v2
with:
node-version: 12.x
- run: yarn
- run: yarn run cli -m aws-list-tasks
env:
AWS_REGION: eu-west-2
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: eu-west-2
- run: yarn run cli -m aws-list-stacks
env:
AWS_REGION: eu-west-2
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: eu-west-2
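The cleanup job above invokes the repository's CLI listing modes; per the commit notes, a companion `aws-garbage-collect` mode removes resources older than one day. A minimal sketch of such an age filter in TypeScript (the `StackSummary` shape and function name are illustrative, not the repository's actual code):

```typescript
// Illustrative age filter for garbage collection: keep only resources
// older than the given number of hours (the commits mention one day).
interface StackSummary {
  stackName: string;
  creationTime: Date;
}

function stacksOlderThan(stacks: StackSummary[], hours: number, now: Date = new Date()): StackSummary[] {
  const cutoffMs = hours * 60 * 60 * 1000;
  return stacks.filter((stack) => now.getTime() - stack.creationTime.getTime() > cutoffMs);
}

const now = new Date('2022-04-22T00:00:00Z');
const stacks: StackSummary[] = [
  { stackName: 'fresh-build', creationTime: new Date('2022-04-21T23:00:00Z') },
  { stackName: 'stale-build', creationTime: new Date('2022-04-20T12:00:00Z') },
];
console.log(stacksOlderThan(stacks, 24, now).map((s) => s.stackName)); // [ 'stale-build' ]
```

Listing first and deleting later (as the `aws-garbage-collect do not actually delete anything for now - just list` commit does) makes the filter easy to dry-run before enabling destructive cleanup.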

@ -37,9 +37,9 @@ jobs:
#- StandaloneOSX # Build a macOS standalone (Intel 64-bit).
- StandaloneWindows64 # Build a Windows 64-bit standalone.
- StandaloneLinux64 # Build a Linux 64-bit standalone.
+ - WebGL # WebGL.
#- iOS # Build an iOS player.
#- Android # Build an Android .apk.
- #- WebGL # WebGL.
# - StandaloneWindows # Build a Windows standalone.
# - WSAPlayer # Build an Windows Store Apps player.
# - PS4 # Build a PS4 Standalone.
@ -91,7 +91,7 @@ jobs:
aws s3 ls game-ci-test-storage
ls /data/cache/$CACHE_KEY
ls /data/cache/$CACHE_KEY/build
- aws s3 cp /data/cache/$CACHE_KEY/build/build-$BUILD_GUID.zip s3://game-ci-test-storage/$CACHE_KEY/build-$BUILD_GUID.zip
+ aws s3 cp /data/cache/$CACHE_KEY/build/build-$BUILD_GUID.tar s3://game-ci-test-storage/$CACHE_KEY/build-$BUILD_GUID.tar
secrets:
- name: awsAccessKeyId
value: ${{ secrets.AWS_ACCESS_KEY_ID }}
@ -100,7 +100,7 @@ jobs:
- name: awsDefaultRegion
value: eu-west-2
- run: |
- aws s3 cp s3://game-ci-test-storage/${{ steps.aws-fargate-unity-build.outputs.CACHE_KEY }}/build-${{ steps.aws-fargate-unity-build.outputs.BUILD_GUID }}.zip build-${{ steps.aws-fargate-unity-build.outputs.BUILD_GUID }}.zip
+ aws s3 cp s3://game-ci-test-storage/${{ steps.aws-fargate-unity-build.outputs.CACHE_KEY }}/build-${{ steps.aws-fargate-unity-build.outputs.BUILD_GUID }}.tar build-${{ steps.aws-fargate-unity-build.outputs.BUILD_GUID }}.tar
ls
- run: yarn run cli -m aws-garbage-collect
###########################
@ -110,5 +110,5 @@ jobs:
- uses: actions/upload-artifact@v2
with:
name: AWS Build (${{ matrix.targetPlatform }})
- path: build-${{ steps.aws-fargate-unity-build.outputs.BUILD_GUID }}.zip
+ path: build-${{ steps.aws-fargate-unity-build.outputs.BUILD_GUID }}.tar
retention-days: 14

@ -105,8 +105,8 @@ jobs:
aws s3 ls
aws s3 ls game-ci-test-storage
ls /data/cache/$CACHE_KEY
- echo "/data/cache/$CACHE_KEY/build/build-$BUILD_GUID.zip s3://game-ci-test-storage/$CACHE_KEY/$BUILD_FILE"
+ echo "/data/cache/$CACHE_KEY/build/build-$BUILD_GUID.tar s3://game-ci-test-storage/$CACHE_KEY/$BUILD_FILE"
- aws s3 cp /data/cache/$CACHE_KEY/build/build-$BUILD_GUID.zip s3://game-ci-test-storage/$CACHE_KEY/build-$BUILD_GUID.zip
+ aws s3 cp /data/cache/$CACHE_KEY/build/build-$BUILD_GUID.tar s3://game-ci-test-storage/$CACHE_KEY/build-$BUILD_GUID.tar
secrets:
- name: awsAccessKeyId
value: ${{ secrets.AWS_ACCESS_KEY_ID }}
@ -115,7 +115,7 @@ jobs:
- name: awsDefaultRegion
value: eu-west-2
- run: |
- aws s3 cp s3://game-ci-test-storage/${{ steps.k8s-unity-build.outputs.CACHE_KEY }}/build-${{ steps.k8s-unity-build.outputs.BUILD_GUID }}.zip build-${{ steps.k8s-unity-build.outputs.BUILD_GUID }}.zip
+ aws s3 cp s3://game-ci-test-storage/${{ steps.k8s-unity-build.outputs.CACHE_KEY }}/build-${{ steps.k8s-unity-build.outputs.BUILD_GUID }}.tar build-${{ steps.k8s-unity-build.outputs.BUILD_GUID }}.tar
ls
###########################
# Upload #
@ -124,5 +124,5 @@ jobs:
- uses: actions/upload-artifact@v2
with:
name: K8s Build (${{ matrix.targetPlatform }})
- path: build-${{ steps.k8s-unity-build.outputs.BUILD_GUID }}.zip
+ path: build-${{ steps.k8s-unity-build.outputs.BUILD_GUID }}.tar
retention-days: 14

@ -115,11 +115,11 @@ inputs:
required: false
description: 'Either local, k8s or aws can be used to run builds on a remote cluster. Additional parameters must be configured.'
cloudRunnerCpu:
- default: '1.0'
+ default: ''
required: false
description: 'Amount of CPU time to assign the remote build container'
cloudRunnerMemory:
- default: '750M'
+ default: ''
required: false
description: 'Amount of memory to assign the remote build container'
cachePushOverrideCommand:
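With these defaults now empty strings, each provider can apply its own container sizing at runtime (the AWS job stack in this PR falls back to 1024 CPU units and 3072 MiB). A sketch of that fallback logic; the function name and return shape are illustrative:

```typescript
// Sketch of per-provider container resource defaults: an empty input
// string falls through to the provider's own default values.
interface ContainerResources {
  cpu: number;
  memory: number;
}

function resolveAwsResources(cloudRunnerCpu: string, cloudRunnerMemory: string): ContainerResources {
  // AWS Fargate units: 1024 CPU units = 1 vCPU; memory is in MiB.
  const cpu = Number.parseInt(cloudRunnerCpu || '1024', 10);
  const memory = Number.parseInt(cloudRunnerMemory || '3072', 10);
  return { cpu, memory };
}

// Empty inputs resolve to the provider defaults; explicit inputs win.
console.log(resolveAwsResources('', '')); // { cpu: 1024, memory: 3072 }
console.log(resolveAwsResources('512', '2048')); // { cpu: 512, memory: 2048 }
```

Moving the defaults out of `action.yml` and into each provider matches the `per provider container defaults` commits: k8s and AWS need different units, so a single YAML default cannot serve both.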

@ -1,416 +0,0 @@
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS Fargate cluster that can span public and private subnets. Supports
public facing load balancers, private internal load balancers, and
both internal and external service discovery namespaces.
Parameters:
EnvironmentName:
Type: String
Default: development
Description: "Your deployment environment: DEV, QA , PROD"
Version:
Type: String
Description: "hash of template"
# ContainerPort:
# Type: Number
# Default: 80
# Description: What port number the application inside the docker container is binding to
Mappings:
# Hard values for the subnet masks. These masks define
# the range of internal IP addresses that can be assigned.
# The VPC can have all IP's from 10.0.0.0 to 10.0.255.255
# There are four subnets which cover the ranges:
#
# 10.0.0.0 - 10.0.0.255
# 10.0.1.0 - 10.0.1.255
# 10.0.2.0 - 10.0.2.255
# 10.0.3.0 - 10.0.3.255
SubnetConfig:
VPC:
CIDR: '10.0.0.0/16'
PublicOne:
CIDR: '10.0.0.0/24'
PublicTwo:
CIDR: '10.0.1.0/24'
Resources:
# VPC in which containers will be networked.
# It has two public subnets, and two private subnets.
# We distribute the subnets across the first two available subnets
# for the region, for high availability.
VPC:
Type: AWS::EC2::VPC
Properties:
EnableDnsSupport: true
EnableDnsHostnames: true
CidrBlock: !FindInMap ['SubnetConfig', 'VPC', 'CIDR']
EFSServerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupName: "efs-server-endpoints"
GroupDescription: Which client ip addrs are allowed to access EFS server
VpcId: !Ref 'VPC'
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 2049
ToPort: 2049
SourceSecurityGroupId: !Ref ContainerSecurityGroup
#CidrIp: !FindInMap ['SubnetConfig', 'VPC', 'CIDR']
# A security group for the containers we will run in Fargate.
# Rules are added to this security group based on what ingress you
# add for the cluster.
ContainerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupName: "task security group"
GroupDescription: Access to the Fargate containers
VpcId: !Ref 'VPC'
# SecurityGroupIngress:
# - IpProtocol: tcp
# FromPort: !Ref ContainerPort
# ToPort: !Ref ContainerPort
# CidrIp: 0.0.0.0/0
SecurityGroupEgress:
- IpProtocol: -1
FromPort: 2049
ToPort: 2049
CidrIp: "0.0.0.0/0"
# Two public subnets, where containers can have public IP addresses
PublicSubnetOne:
Type: AWS::EC2::Subnet
Properties:
AvailabilityZone: !Select
- 0
- Fn::GetAZs: !Ref 'AWS::Region'
VpcId: !Ref 'VPC'
CidrBlock: !FindInMap ['SubnetConfig', 'PublicOne', 'CIDR']
# MapPublicIpOnLaunch: true
PublicSubnetTwo:
Type: AWS::EC2::Subnet
Properties:
AvailabilityZone: !Select
- 1
- Fn::GetAZs: !Ref 'AWS::Region'
VpcId: !Ref 'VPC'
CidrBlock: !FindInMap ['SubnetConfig', 'PublicTwo', 'CIDR']
# MapPublicIpOnLaunch: true
# Setup networking resources for the public subnets. Containers
# in the public subnets have public IP addresses and the routing table
# sends network traffic via the internet gateway.
InternetGateway:
Type: AWS::EC2::InternetGateway
GatewayAttachement:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
VpcId: !Ref 'VPC'
InternetGatewayId: !Ref 'InternetGateway'
# Attaching an Internet Gateway to a route table makes it public.
PublicRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref 'VPC'
PublicRoute:
Type: AWS::EC2::Route
DependsOn: GatewayAttachement
Properties:
RouteTableId: !Ref 'PublicRouteTable'
DestinationCidrBlock: '0.0.0.0/0'
GatewayId: !Ref 'InternetGateway'
# Attaching a public route table makes a subnet public.
PublicSubnetOneRouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref PublicSubnetOne
RouteTableId: !Ref PublicRouteTable
PublicSubnetTwoRouteTableAssociation:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref PublicSubnetTwo
RouteTableId: !Ref PublicRouteTable
# ECS Resources
ECSCluster:
Type: AWS::ECS::Cluster
# A role used to allow AWS Autoscaling to inspect stats and adjust scaleable targets
# on your AWS account
AutoscalingRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Statement:
- Effect: Allow
Principal:
Service: [application-autoscaling.amazonaws.com]
Action: ['sts:AssumeRole']
Path: /
Policies:
- PolicyName: service-autoscaling
PolicyDocument:
Statement:
- Effect: Allow
Action:
- 'application-autoscaling:*'
- 'cloudwatch:DescribeAlarms'
- 'cloudwatch:PutMetricAlarm'
- 'ecs:DescribeServices'
- 'ecs:UpdateService'
Resource: '*'
# This is an IAM role which authorizes ECS to manage resources on your
# account on your behalf, such as updating your load balancer with the
# details of where your containers are, so that traffic can reach your
# containers.
ECSRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Statement:
- Effect: Allow
Principal:
Service: [ecs.amazonaws.com]
Action: ['sts:AssumeRole']
Path: /
Policies:
- PolicyName: ecs-service
PolicyDocument:
Statement:
- Effect: Allow
Action:
# Rules which allow ECS to attach network interfaces to instances
# on your behalf in order for awsvpc networking mode to work right
- 'ec2:AttachNetworkInterface'
- 'ec2:CreateNetworkInterface'
- 'ec2:CreateNetworkInterfacePermission'
- 'ec2:DeleteNetworkInterface'
- 'ec2:DeleteNetworkInterfacePermission'
- 'ec2:Describe*'
- 'ec2:DetachNetworkInterface'
# Rules which allow ECS to update load balancers on your behalf
# with the information about how to send traffic to your containers
- 'elasticloadbalancing:DeregisterInstancesFromLoadBalancer'
- 'elasticloadbalancing:DeregisterTargets'
- 'elasticloadbalancing:Describe*'
- 'elasticloadbalancing:RegisterInstancesWithLoadBalancer'
- 'elasticloadbalancing:RegisterTargets'
Resource: '*'
# This is a role which is used by the ECS tasks themselves.
ECSTaskExecutionRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Statement:
- Effect: Allow
Principal:
Service: [ecs-tasks.amazonaws.com]
Action: ['sts:AssumeRole']
Path: /
Policies:
- PolicyName: AmazonECSTaskExecutionRolePolicy
PolicyDocument:
Statement:
- Effect: Allow
Action:
# Allow the use of secret manager
- 'secretsmanager:GetSecretValue'
- 'kms:Decrypt'
# Allow the ECS Tasks to download images from ECR
- 'ecr:GetAuthorizationToken'
- 'ecr:BatchCheckLayerAvailability'
- 'ecr:GetDownloadUrlForLayer'
- 'ecr:BatchGetImage'
# Allow the ECS tasks to upload logs to CloudWatch
- 'logs:CreateLogStream'
- 'logs:PutLogEvents'
Resource: '*'
DeleteCFNLambdaExecutionRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service: ["lambda.amazonaws.com"]
Action: "sts:AssumeRole"
Path: "/"
Policies:
- PolicyName: DeleteCFNLambdaExecutionRole
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
- "logs:CreateLogGroup"
- "logs:CreateLogStream"
- "logs:PutLogEvents"
Resource: "arn:aws:logs:*:*:*"
- Effect: "Allow"
Action:
- "cloudformation:DeleteStack"
- "kinesis:DeleteStream"
- "secretsmanager:DeleteSecret"
- "kinesis:DescribeStreamSummary"
- "logs:DeleteLogGroup"
- "logs:DeleteSubscriptionFilter"
- "ecs:DeregisterTaskDefinition"
- "lambda:DeleteFunction"
- "lambda:InvokeFunction"
- "events:RemoveTargets"
- "events:DeleteRule"
- "lambda:RemovePermission"
Resource: "*"
### cloud watch to kinesis role
CloudWatchIAMRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Statement:
- Effect: Allow
Principal:
Service: [logs.amazonaws.com]
Action: ['sts:AssumeRole']
Path: /
Policies:
- PolicyName: service-autoscaling
PolicyDocument:
Statement:
- Effect: Allow
Action:
- 'kinesis:PutRecord'
Resource: '*'
#####################EFS#####################
EfsFileStorage:
Type: 'AWS::EFS::FileSystem'
Properties:
BackupPolicy:
Status: ENABLED
PerformanceMode: maxIO
Encrypted: false
FileSystemPolicy:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Action:
- "elasticfilesystem:ClientMount"
- "elasticfilesystem:ClientWrite"
- "elasticfilesystem:ClientRootAccess"
Principal:
AWS: "*"
MountTargetResource1:
Type: AWS::EFS::MountTarget
Properties:
FileSystemId: !Ref EfsFileStorage
SubnetId: !Ref PublicSubnetOne
SecurityGroups:
- !Ref EFSServerSecurityGroup
MountTargetResource2:
Type: AWS::EFS::MountTarget
Properties:
FileSystemId: !Ref EfsFileStorage
SubnetId: !Ref PublicSubnetTwo
SecurityGroups:
- !Ref EFSServerSecurityGroup
Outputs:
EfsFileStorageId:
Description: 'The connection endpoint for the database.'
Value: !Ref EfsFileStorage
Export:
Name: !Sub ${EnvironmentName}:EfsFileStorageId
ClusterName:
Description: The name of the ECS cluster
Value: !Ref 'ECSCluster'
Export:
Name: !Sub ${EnvironmentName}:ClusterName
AutoscalingRole:
Description: The ARN of the role used for autoscaling
Value: !GetAtt 'AutoscalingRole.Arn'
Export:
Name: !Sub ${EnvironmentName}:AutoscalingRole
ECSRole:
Description: The ARN of the ECS role
Value: !GetAtt 'ECSRole.Arn'
Export:
Name: !Sub ${EnvironmentName}:ECSRole
ECSTaskExecutionRole:
Description: The ARN of the ECS task execution role
Value: !GetAtt 'ECSTaskExecutionRole.Arn'
Export:
Name: !Sub ${EnvironmentName}:ECSTaskExecutionRole
DeleteCFNLambdaExecutionRole:
Description: Lambda execution role for cleaning up cloud formations
Value: !GetAtt 'DeleteCFNLambdaExecutionRole.Arn'
Export:
Name: !Sub ${EnvironmentName}:DeleteCFNLambdaExecutionRole
CloudWatchIAMRole:
Description: The ARN of the CloudWatch role for subscription filter
Value: !GetAtt 'CloudWatchIAMRole.Arn'
Export:
Name: !Sub ${EnvironmentName}:CloudWatchIAMRole
VpcId:
Description: The ID of the VPC that this stack is deployed in
Value: !Ref 'VPC'
Export:
Name: !Sub ${EnvironmentName}:VpcId
PublicSubnetOne:
Description: Public subnet one
Value: !Ref 'PublicSubnetOne'
Export:
Name: !Sub ${EnvironmentName}:PublicSubnetOne
PublicSubnetTwo:
Description: Public subnet two
Value: !Ref 'PublicSubnetTwo'
Export:
Name: !Sub ${EnvironmentName}:PublicSubnetTwo
ContainerSecurityGroup:
Description: A security group used to allow Fargate containers to receive traffic
Value: !Ref 'ContainerSecurityGroup'
Export:
Name: !Sub ${EnvironmentName}:ContainerSecurityGroup

@ -1,221 +0,0 @@
AWSTemplateFormatVersion: 2010-09-09
Description: >-
AWS Fargate cluster that can span public and private subnets. Supports public
facing load balancers, private internal load balancers, and both internal and
external service discovery namespaces.
Parameters:
EnvironmentName:
Type: String
Default: development
Description: 'Your deployment environment: DEV, QA , PROD'
ServiceName:
Type: String
Default: example
Description: A name for the service
ImageUrl:
Type: String
Default: nginx
Description: >-
The url of a docker image that contains the application process that will
handle the traffic for this service
ContainerPort:
Type: Number
Default: 80
Description: What port number the application inside the docker container is binding to
ContainerCpu:
Type: Number
Default: 1024
Description: How much CPU to give the container. 1024 is 1 CPU
ContainerMemory:
Type: Number
Default: 2048
Description: How much memory in megabytes to give the container
BUILDGUID:
Type: String
Default: ''
Command:
Type: String
Default: 'ls'
EntryPoint:
Type: String
Default: '/bin/sh'
WorkingDirectory:
Type: String
Default: '/efsdata/'
Role:
Type: String
Default: ''
Description: >-
(Optional) An IAM role to give the service's containers if the code within
needs to access other AWS resources
EFSMountDirectory:
Type: String
Default: '/efsdata'
# template secrets p1 - input
Mappings:
SubnetConfig:
VPC:
CIDR: 10.0.0.0/16
PublicOne:
CIDR: 10.0.0.0/24
PublicTwo:
CIDR: 10.0.1.0/24
Conditions:
HasCustomRole: !Not
- !Equals
- Ref: Role
- ''
Resources:
LogGroup:
Type: 'AWS::Logs::LogGroup'
Properties:
LogGroupName: !Ref ServiceName
Metadata:
'AWS::CloudFormation::Designer':
id: aece53ae-b82d-4267-bc16-ed964b05db27
SubscriptionFilter:
Type: 'AWS::Logs::SubscriptionFilter'
Properties:
FilterPattern: ''
RoleArn:
'Fn::ImportValue': !Sub '${EnvironmentName}:CloudWatchIAMRole'
LogGroupName: !Ref ServiceName
DestinationArn:
'Fn::GetAtt':
- KinesisStream
- Arn
Metadata:
'AWS::CloudFormation::Designer':
id: 7f809e91-9e5d-4678-98c1-c5085956c480
DependsOn:
- LogGroup
- KinesisStream
KinesisStream:
Type: 'AWS::Kinesis::Stream'
Properties:
Name: !Ref ServiceName
ShardCount: 1
Metadata:
'AWS::CloudFormation::Designer':
id: c6f18447-b879-4696-8873-f981b2cedd2b
# template secrets p2 - secret
TaskDefinition:
Type: 'AWS::ECS::TaskDefinition'
Properties:
Family: !Ref ServiceName
Cpu: !Ref ContainerCpu
Memory: !Ref ContainerMemory
NetworkMode: awsvpc
Volumes:
- Name: efs-data
EFSVolumeConfiguration:
FilesystemId:
'Fn::ImportValue': !Sub '${EnvironmentName}:EfsFileStorageId'
TransitEncryption: ENABLED
RequiresCompatibilities:
- FARGATE
ExecutionRoleArn:
'Fn::ImportValue': !Sub '${EnvironmentName}:ECSTaskExecutionRole'
TaskRoleArn:
'Fn::If':
- HasCustomRole
- !Ref Role
- !Ref 'AWS::NoValue'
ContainerDefinitions:
- Name: !Ref ServiceName
Cpu: !Ref ContainerCpu
Memory: !Ref ContainerMemory
Image: !Ref ImageUrl
EntryPoint:
Fn::Split:
- ","
- !Ref EntryPoint
Command:
Fn::Split:
- ","
- !Ref Command
WorkingDirectory: !Ref WorkingDirectory
Environment:
- Name: ALLOW_EMPTY_PASSWORD
Value: 'yes'
# template - env vars
MountPoints:
- SourceVolume: efs-data
ContainerPath: !Ref EFSMountDirectory
ReadOnly: false
Secrets:
# template secrets p3 - container def
LogConfiguration:
LogDriver: awslogs
Options:
awslogs-group: !Ref ServiceName
awslogs-region: !Ref 'AWS::Region'
awslogs-stream-prefix: !Ref ServiceName
Metadata:
'AWS::CloudFormation::Designer':
id: dabb0116-abe0-48a6-a8af-cf9111c879a5
DependsOn:
- LogGroup
Metadata:
'AWS::CloudFormation::Designer':
dabb0116-abe0-48a6-a8af-cf9111c879a5:
size:
width: 60
height: 60
position:
x: 270
'y': 90
z: 1
embeds: []
dependson:
- aece53ae-b82d-4267-bc16-ed964b05db27
c6f18447-b879-4696-8873-f981b2cedd2b:
size:
width: 60
height: 60
position:
x: 270
'y': 210
z: 1
embeds: []
7f809e91-9e5d-4678-98c1-c5085956c480:
size:
width: 60
height: 60
position:
x: 60
'y': 300
z: 1
embeds: []
dependson:
- aece53ae-b82d-4267-bc16-ed964b05db27
- c6f18447-b879-4696-8873-f981b2cedd2b
aece53ae-b82d-4267-bc16-ed964b05db27:
size:
width: 150
height: 150
position:
x: 60
'y': 90
z: 1
embeds: []
4d2da56c-3643-46b8-aaee-e46e19f95fcc:
source:
id: 7f809e91-9e5d-4678-98c1-c5085956c480
target:
id: aece53ae-b82d-4267-bc16-ed964b05db27
z: 11
14eb957b-f094-4653-93c4-77b2f851953c:
source:
id: 7f809e91-9e5d-4678-98c1-c5085956c480
target:
id: c6f18447-b879-4696-8873-f981b2cedd2b
z: 12
85c57444-e5bb-4230-bc85-e545cd4558f6:
source:
id: dabb0116-abe0-48a6-a8af-cf9111c879a5
target:
id: aece53ae-b82d-4267-bc16-ed964b05db27
z: 13

BIN
dist/index.js generated vendored

Binary file not shown.

BIN
dist/index.js.map generated vendored

Binary file not shown.

@ -1,3 +1,3 @@
export class CloudRunnerStatics {
- public static readonly logPrefix = `Cloud-Runner-System`;
+ public static readonly logPrefix = `Cloud-Runner`;
}

@ -1,8 +1,7 @@
import CloudRunnerLogger from '../../services/cloud-runner-logger';
import * as core from '@actions/core';
import * as SDK from 'aws-sdk';
- import * as fs from 'fs';
- import path from 'path';
+ import { BaseStackFormation } from './cloud-formations/base-stack-formation';
const crypto = require('crypto');
export class AWSBaseStack {
@ -14,7 +13,7 @@ export class AWSBaseStack {
async setupBaseStack(CF: SDK.CloudFormation) {
const baseStackName = this.baseStackName;
- const baseStack = fs.readFileSync(path.join(__dirname, 'cloud-formations', 'base-setup.yml'), 'utf8');
+ const baseStack = BaseStackFormation.formation;
// Cloud Formation Input
const describeStackInput: SDK.CloudFormation.DescribeStacksInput = {

@ -1,4 +1,4 @@
- import * as fs from 'fs';
+ import { TaskDefinitionFormation } from './cloud-formations/task-definition-formation';
export class AWSCloudFormationTemplates {
public static getParameterTemplate(p1) {
@ -34,6 +34,6 @@
}
public static readTaskCloudFormationTemplate(): string {
- return fs.readFileSync(`${__dirname}/cloud-formations/task-def-formation.yml`, 'utf8');
+ return TaskDefinitionFormation.formation;
}
}
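Both changes swap `fs.readFileSync` for a template baked into the TypeScript source, which keeps the bundled `dist/index.js` self-contained. A minimal sketch of the pattern with an abbreviated template body (the real class holds the full YAML deleted elsewhere in this diff):

```typescript
// Baked-in CloudFormation template: the YAML ships as a static string
// constant, so nothing has to be read from disk at runtime.
export class TaskDefinitionFormation {
  public static readonly formation: string = `AWSTemplateFormatVersion: 2010-09-09
Parameters:
  ContainerCpu:
    Type: Number
    Default: 1024
  ContainerMemory:
    Type: Number
    Default: 2048
`;
}

// Consumers read the constant instead of calling fs.readFileSync:
const templateBody = TaskDefinitionFormation.formation;
console.log(templateBody.includes('ContainerCpu')); // true
```

This sidesteps a common pitfall with bundlers: `__dirname`-relative file reads break once the action is compiled into a single `dist/index.js`.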

@@ -4,6 +4,7 @@ import CloudRunnerSecret from '../../services/cloud-runner-secret';
 import { AWSCloudFormationTemplates } from './aws-cloud-formation-templates';
 import CloudRunnerLogger from '../../services/cloud-runner-logger';
 import { AWSError } from './aws-error';
+import CloudRunner from '../../cloud-runner';
 export class AWSJobStack {
   private baseStackName: string;
@@ -23,6 +24,20 @@ export class AWSJobStack {
   ): Promise<CloudRunnerAWSTaskDef> {
     const taskDefStackName = `${this.baseStackName}-${buildGuid}`;
     let taskDefCloudFormation = AWSCloudFormationTemplates.readTaskCloudFormationTemplate();
+    const cpu = CloudRunner.buildParameters.cloudRunnerCpu || '1024';
+    const memory = CloudRunner.buildParameters.cloudRunnerMemory || '3072';
+    taskDefCloudFormation = taskDefCloudFormation.replace(
+      `ContainerCpu:
+    Default: 1024`,
+      `ContainerCpu:
+    Default: ${Number.parseInt(cpu)}`,
+    );
+    taskDefCloudFormation = taskDefCloudFormation.replace(
+      `ContainerMemory:
+    Default: 2048`,
+      `ContainerMemory:
+    Default: ${Number.parseInt(memory)}`,
+    );
     for (const secret of secrets) {
       secret.ParameterKey = `${buildGuid.replace(/[^\dA-Za-z]/g, '')}${secret.ParameterKey.replace(
         /[^\dA-Za-z]/g,
@@ -85,7 +100,9 @@ export class AWSJobStack {
       },
       ...secretsMappedToCloudFormationParameters,
     ];
+    CloudRunnerLogger.log(
+      `Starting AWS job with memory: ${CloudRunner.buildParameters.cloudRunnerMemory} cpu: ${CloudRunner.buildParameters.cloudRunnerCpu}`,
+    );
     let previousStackExists = true;
     while (previousStackExists) {
       previousStackExists = false;
@@ -101,14 +118,15 @@ export class AWSJobStack {
         }
       }
     }
+    const createStackInput: SDK.CloudFormation.CreateStackInput = {
-    try {
-      await CF.createStack({
       StackName: taskDefStackName,
       TemplateBody: taskDefCloudFormation,
       Capabilities: ['CAPABILITY_IAM'],
       Parameters: parameters,
-      }).promise();
+    };
+    try {
+      await CF.createStack(createStackInput).promise();
       CloudRunnerLogger.log('Creating cloud runner job');
       await CF.waitFor('stackCreateComplete', { StackName: taskDefStackName }).promise();
     } catch (error) {
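The `Default:` rewrite above patches the embedded CloudFormation template by literal string replacement before the stack is created. A minimal sketch of that mechanic, using a trimmed stand-in template (an assumption; the real formation carries many more parameters):

```typescript
// Trimmed stand-in for the task-definition formation text.
const formation = `ContainerCpu:
    Default: 1024
ContainerMemory:
    Default: 2048`;

// Literal string replacement, mirroring the approach in the diff: swap the
// parameter's Default line for one carrying the requested value. parseInt
// keeps non-numeric input out of the emitted template.
function setDefault(template: string, parameter: string, oldValue: number, newValue: string): string {
  return template.replace(
    `${parameter}:
    Default: ${oldValue}`,
    `${parameter}:
    Default: ${Number.parseInt(newValue)}`,
  );
}

const patched = setDefault(setDefault(formation, 'ContainerCpu', 1024, '512'), 'ContainerMemory', 2048, '4096');
```

String replacement is fragile against template reformatting, which is why the diff also reorders `Type:`/`Default:` in the template so the replaced block matches exactly.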

View File

@@ -58,14 +58,21 @@ class AWSTaskRunner {
     CloudRunnerLogger.log(
       `Cloud runner job status is running ${(await AWSTaskRunner.describeTasks(ECS, cluster, taskArn))?.lastStatus}`,
     );
-    const output = await this.streamLogsUntilTaskStops(ECS, CF, taskDef, cluster, taskArn, streamName);
+    const { output, shouldCleanup } = await this.streamLogsUntilTaskStops(
+      ECS,
+      CF,
+      taskDef,
+      cluster,
+      taskArn,
+      streamName,
+    );
     const taskData = await AWSTaskRunner.describeTasks(ECS, cluster, taskArn);
     const exitCode = taskData.containers?.[0].exitCode;
     const wasSuccessful = exitCode === 0 || (exitCode === undefined && taskData.lastStatus === 'RUNNING');
     if (wasSuccessful) {
       CloudRunnerLogger.log(`Cloud runner job has finished successfully`);
-      return output;
+      return { output, shouldCleanup };
     } else {
       if (taskData.stoppedReason === 'Essential container in task exited' && exitCode === 1) {
         throw new Error('Container exited with code 1');
@@ -122,22 +129,24 @@ class AWSTaskRunner {
     const logBaseUrl = `https://${Input.region}.console.aws.amazon.com/cloudwatch/home?region=${CF.config.region}#logsV2:log-groups/log-group/${taskDef.taskDefStackName}`;
     CloudRunnerLogger.log(`You can also see the logs at AWS Cloud Watch: ${logBaseUrl}`);
     let shouldReadLogs = true;
+    let shouldCleanup = true;
     let timestamp: number = 0;
     let output = '';
     while (shouldReadLogs) {
       await new Promise((resolve) => setTimeout(resolve, 1500));
       const taskData = await AWSTaskRunner.describeTasks(ECS, clusterName, taskArn);
       ({ timestamp, shouldReadLogs } = AWSTaskRunner.checkStreamingShouldContinue(taskData, timestamp, shouldReadLogs));
-      ({ iterator, shouldReadLogs, output } = await AWSTaskRunner.handleLogStreamIteration(
+      ({ iterator, shouldReadLogs, output, shouldCleanup } = await AWSTaskRunner.handleLogStreamIteration(
         kinesis,
         iterator,
         shouldReadLogs,
         taskDef,
         output,
+        shouldCleanup,
       ));
     }
-    return output;
+    return { output, shouldCleanup };
   }
   private static async handleLogStreamIteration(
@@ -146,6 +155,7 @@ class AWSTaskRunner {
     shouldReadLogs: boolean,
     taskDef: CloudRunnerAWSTaskDef,
     output: string,
+    shouldCleanup: boolean,
   ) {
     const records = await kinesis
       .getRecords({
@@ -153,9 +163,16 @@ class AWSTaskRunner {
       })
       .promise();
     iterator = records.NextShardIterator || '';
-    ({ shouldReadLogs, output } = AWSTaskRunner.logRecords(records, iterator, taskDef, shouldReadLogs, output));
+    ({ shouldReadLogs, output, shouldCleanup } = AWSTaskRunner.logRecords(
+      records,
+      iterator,
+      taskDef,
+      shouldReadLogs,
+      output,
+      shouldCleanup,
+    ));
-    return { iterator, shouldReadLogs, output };
+    return { iterator, shouldReadLogs, output, shouldCleanup };
   }
   private static checkStreamingShouldContinue(taskData: AWS.ECS.Task, timestamp: number, shouldReadLogs: boolean) {
@@ -183,6 +200,7 @@ class AWSTaskRunner {
     taskDef: CloudRunnerAWSTaskDef,
     shouldReadLogs: boolean,
     output: string,
+    shouldCleanup: boolean,
   ) {
     if (records.Records.length > 0 && iterator) {
       for (let index = 0; index < records.Records.length; index++) {
@@ -197,11 +215,18 @@ class AWSTaskRunner {
           shouldReadLogs = false;
         } else if (message.includes('Rebuilding Library because the asset database could not be found!')) {
           core.warning('LIBRARY NOT FOUND!');
+          core.setOutput('library-found', 'false');
         } else if (message.includes('Build succeeded')) {
           core.setOutput('build-result', 'success');
         } else if (message.includes('Build fail')) {
           core.setOutput('build-result', 'failed');
+          core.setFailed('unity build failed');
           core.error('BUILD FAILED!');
+        } else if (message.includes(': Listening for Jobs')) {
+          core.setOutput('cloud runner stop watching', 'true');
+          shouldReadLogs = false;
+          shouldCleanup = false;
+          core.warning('cloud runner stop watching');
         }
         message = `[${CloudRunnerStatics.logPrefix}] ${message}`;
         if (CloudRunner.buildParameters.cloudRunnerIntegrationTests) {
@@ -213,7 +238,7 @@ class AWSTaskRunner {
       }
     }
-    return { shouldReadLogs, output };
+    return { shouldReadLogs, output, shouldCleanup };
   }
   private static async getLogStream(kinesis: AWS.Kinesis, kinesisStreamName: string) {
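The log scanning above multiplexes control signals through the build output: certain messages flip the streaming and cleanup flags or record a build result. A reduced sketch of that dispatch; `ScanState` and `scanMessage` are illustrative names, not from the codebase:

```typescript
// Flags threaded through the log loop, as in the diff: whether to keep
// streaming, whether to tear down the stack afterwards, and the build result.
interface ScanState {
  shouldReadLogs: boolean;
  shouldCleanup: boolean;
  buildResult?: string;
}

function scanMessage(message: string, state: ScanState): ScanState {
  if (message.includes('Build succeeded')) {
    return { ...state, buildResult: 'success' };
  } else if (message.includes('Build fail')) {
    return { ...state, buildResult: 'failed' };
  } else if (message.includes(': Listening for Jobs')) {
    // A resident agent signals it should outlive this run: stop streaming
    // logs and skip stack cleanup so the task keeps running.
    return { ...state, shouldReadLogs: false, shouldCleanup: false };
  }
  return state;
}
```

Returning a fresh state object keeps the dispatch testable; the real code instead mutates destructured locals inside the loop.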

View File

@@ -1,7 +1,6 @@
-AWSTemplateFormatVersion: '2010-09-09'
-Description: AWS Fargate cluster that can span public and private subnets. Supports
-  public facing load balancers, private internal load balancers, and
-  both internal and external service discovery namespaces.
+export class BaseStackFormation {
+  public static readonly formation: string = `AWSTemplateFormatVersion: '2010-09-09'
+Description: Game-CI base stack
 Parameters:
   EnvironmentName:
     Type: String
@@ -335,57 +334,58 @@ Outputs:
     Description: 'The connection endpoint for the EFS file storage.'
     Value: !Ref EfsFileStorage
     Export:
-      Name: !Sub ${EnvironmentName}:EfsFileStorageId
+      Name: !Sub ${'${EnvironmentName}'}:EfsFileStorageId
   ClusterName:
     Description: The name of the ECS cluster
     Value: !Ref 'ECSCluster'
     Export:
-      Name: !Sub ${EnvironmentName}:ClusterName
+      Name: !Sub ${'${EnvironmentName}'}:ClusterName
   AutoscalingRole:
     Description: The ARN of the role used for autoscaling
     Value: !GetAtt 'AutoscalingRole.Arn'
     Export:
-      Name: !Sub ${EnvironmentName}:AutoscalingRole
+      Name: !Sub ${'${EnvironmentName}'}:AutoscalingRole
   ECSRole:
     Description: The ARN of the ECS role
     Value: !GetAtt 'ECSRole.Arn'
     Export:
-      Name: !Sub ${EnvironmentName}:ECSRole
+      Name: !Sub ${'${EnvironmentName}'}:ECSRole
   ECSTaskExecutionRole:
     Description: The ARN of the ECS task execution role
     Value: !GetAtt 'ECSTaskExecutionRole.Arn'
     Export:
-      Name: !Sub ${EnvironmentName}:ECSTaskExecutionRole
+      Name: !Sub ${'${EnvironmentName}'}:ECSTaskExecutionRole
   DeleteCFNLambdaExecutionRole:
     Description: Lambda execution role for cleaning up cloud formations
     Value: !GetAtt 'DeleteCFNLambdaExecutionRole.Arn'
     Export:
-      Name: !Sub ${EnvironmentName}:DeleteCFNLambdaExecutionRole
+      Name: !Sub ${'${EnvironmentName}'}:DeleteCFNLambdaExecutionRole
   CloudWatchIAMRole:
     Description: The ARN of the CloudWatch role for subscription filter
     Value: !GetAtt 'CloudWatchIAMRole.Arn'
     Export:
-      Name: !Sub ${EnvironmentName}:CloudWatchIAMRole
+      Name: !Sub ${'${EnvironmentName}'}:CloudWatchIAMRole
   VpcId:
     Description: The ID of the VPC that this stack is deployed in
     Value: !Ref 'VPC'
     Export:
-      Name: !Sub ${EnvironmentName}:VpcId
+      Name: !Sub ${'${EnvironmentName}'}:VpcId
   PublicSubnetOne:
     Description: Public subnet one
     Value: !Ref 'PublicSubnetOne'
     Export:
-      Name: !Sub ${EnvironmentName}:PublicSubnetOne
+      Name: !Sub ${'${EnvironmentName}'}:PublicSubnetOne
   PublicSubnetTwo:
     Description: Public subnet two
     Value: !Ref 'PublicSubnetTwo'
     Export:
-      Name: !Sub ${EnvironmentName}:PublicSubnetTwo
+      Name: !Sub ${'${EnvironmentName}'}:PublicSubnetTwo
   ContainerSecurityGroup:
     Description: A security group used to allow Fargate containers to receive traffic
     Value: !Ref 'ContainerSecurityGroup'
     Export:
-      Name: !Sub ${EnvironmentName}:ContainerSecurityGroup
+      Name: !Sub ${'${EnvironmentName}'}:ContainerSecurityGroup
+`;
+}
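Moving the YAML into a TypeScript template literal is what forces the `${'${EnvironmentName}'}` escapes above: CloudFormation's `!Sub` placeholders use the same `${…}` syntax that JavaScript would otherwise interpolate at build time. Wrapping the placeholder in a nested string expression makes JavaScript emit it verbatim. A minimal demonstration:

```typescript
// JavaScript interpolates the inner string '${EnvironmentName}', so the
// emitted YAML still carries the literal CloudFormation placeholder.
const exportName = `Name: !Sub ${'${EnvironmentName}'}:ClusterName`;
```

An alternative would be escaping the dollar sign (`\${EnvironmentName}`), which reads more naturally but is easy to lose when the YAML is round-tripped through other tooling.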

View File

@@ -1,4 +1,5 @@
-AWSTemplateFormatVersion: 2010-09-09
+export class TaskDefinitionFormation {
+  public static readonly formation: string = `AWSTemplateFormatVersion: 2010-09-09
 Description: >-
   AWS Fargate cluster that can span public and private subnets. Supports public
   facing load balancers, private internal load balancers, and both internal and
@@ -23,12 +24,12 @@ Parameters:
     Default: 80
     Description: What port number the application inside the docker container is binding to
   ContainerCpu:
-    Type: Number
     Default: 1024
+    Type: Number
     Description: How much CPU to give the container. 1024 is 1 CPU
   ContainerMemory:
-    Type: Number
     Default: 2048
+    Type: Number
     Description: How much memory in megabytes to give the container
   BUILDGUID:
     Type: String
@@ -78,7 +79,7 @@ Resources:
     Properties:
       FilterPattern: ''
       RoleArn:
-        'Fn::ImportValue': !Sub '${EnvironmentName}:CloudWatchIAMRole'
+        'Fn::ImportValue': !Sub '${'${EnvironmentName}'}:CloudWatchIAMRole'
       LogGroupName: !Ref ServiceName
       DestinationArn:
         'Fn::GetAtt':
@@ -98,9 +99,7 @@ Resources:
     Metadata:
       'AWS::CloudFormation::Designer':
         id: c6f18447-b879-4696-8873-f981b2cedd2b
   # template secrets p2 - secret
   TaskDefinition:
     Type: 'AWS::ECS::TaskDefinition'
     Properties:
@@ -112,12 +111,12 @@ Resources:
         - Name: efs-data
           EFSVolumeConfiguration:
             FilesystemId:
-              'Fn::ImportValue': !Sub '${EnvironmentName}:EfsFileStorageId'
+              'Fn::ImportValue': !Sub '${'${EnvironmentName}'}:EfsFileStorageId'
             TransitEncryption: ENABLED
       RequiresCompatibilities:
         - FARGATE
       ExecutionRoleArn:
-        'Fn::ImportValue': !Sub '${EnvironmentName}:ECSTaskExecutionRole'
+        'Fn::ImportValue': !Sub '${'${EnvironmentName}'}:ECSTaskExecutionRole'
       TaskRoleArn:
         'Fn::If':
           - HasCustomRole
@@ -153,69 +152,7 @@ Resources:
           awslogs-group: !Ref ServiceName
           awslogs-region: !Ref 'AWS::Region'
           awslogs-stream-prefix: !Ref ServiceName
-    Metadata:
-      'AWS::CloudFormation::Designer':
-        id: dabb0116-abe0-48a6-a8af-cf9111c879a5
     DependsOn:
       - LogGroup
-Metadata:
-  'AWS::CloudFormation::Designer':
-    dabb0116-abe0-48a6-a8af-cf9111c879a5:
-      size:
-        width: 60
-        height: 60
-      position:
-        x: 270
-        'y': 90
-      z: 1
-      embeds: []
-      dependson:
-        - aece53ae-b82d-4267-bc16-ed964b05db27
-    c6f18447-b879-4696-8873-f981b2cedd2b:
-      size:
-        width: 60
-        height: 60
-      position:
-        x: 270
-        'y': 210
-      z: 1
-      embeds: []
-    7f809e91-9e5d-4678-98c1-c5085956c480:
-      size:
-        width: 60
-        height: 60
-      position:
-        x: 60
-        'y': 300
-      z: 1
-      embeds: []
-      dependson:
-        - aece53ae-b82d-4267-bc16-ed964b05db27
-        - c6f18447-b879-4696-8873-f981b2cedd2b
-    aece53ae-b82d-4267-bc16-ed964b05db27:
-      size:
-        width: 150
-        height: 150
-      position:
-        x: 60
-        'y': 90
-      z: 1
-      embeds: []
-    4d2da56c-3643-46b8-aaee-e46e19f95fcc:
-      source:
-        id: 7f809e91-9e5d-4678-98c1-c5085956c480
-      target:
-        id: aece53ae-b82d-4267-bc16-ed964b05db27
-      z: 11
-    14eb957b-f094-4653-93c4-77b2f851953c:
-      source:
-        id: 7f809e91-9e5d-4678-98c1-c5085956c480
-      target:
-        id: c6f18447-b879-4696-8873-f981b2cedd2b
-      z: 12
-    85c57444-e5bb-4230-bc85-e545cd4558f6:
-      source:
-        id: dabb0116-abe0-48a6-a8af-cf9111c879a5
-      target:
-        id: aece53ae-b82d-4267-bc16-ed964b05db27
-      z: 13
+`;
+}

View File

@@ -4,27 +4,89 @@ import Input from '../../../../input';
 import CloudRunnerLogger from '../../../services/cloud-runner-logger';
 export class AwsCliCommands {
-  @CliFunction(`aws-garbage-collect`, `garbage collect aws`)
-  static async garbageCollectAws() {
+  @CliFunction(`aws-list-stacks`, `List stacks`)
+  static async awsListStacks(perResultCallback: any) {
     process.env.AWS_REGION = Input.region;
+    CloudRunnerLogger.log(`Cloud Formation stacks`);
     const CF = new AWS.CloudFormation();
     const stacks =
       (await CF.listStacks().promise()).StackSummaries?.filter((_x) => _x.StackStatus !== 'DELETE_COMPLETE') || [];
+    CloudRunnerLogger.log(`DescribeStacksRequest ${stacks.length}`);
     for (const element of stacks) {
       CloudRunnerLogger.log(JSON.stringify(element, undefined, 4));
+      CloudRunnerLogger.log(`${element.StackName}`);
+      if (perResultCallback) await perResultCallback(element);
     }
-    CloudRunnerLogger.log(`ECS Clusters`);
-    const ecs = new AWS.ECS();
-    const clusters = (await ecs.listClusters().promise()).clusterArns || [];
     if (stacks === undefined) {
       return;
     }
+  }
+  @CliFunction(`aws-list-tasks`, `List tasks`)
+  static async awsListTasks(perResultCallback: any) {
+    process.env.AWS_REGION = Input.region;
+    CloudRunnerLogger.log(`ECS Clusters`);
+    const ecs = new AWS.ECS();
+    const clusters = (await ecs.listClusters().promise()).clusterArns || [];
     for (const element of clusters) {
       const input: AWS.ECS.ListTasksRequest = {
         cluster: element,
       };
-      CloudRunnerLogger.log(JSON.stringify(await ecs.listTasks(input).promise(), undefined, 4));
+      const list = (await ecs.listTasks(input).promise()).taskArns || [];
+      if (list.length > 0) {
+        const describeInput: AWS.ECS.DescribeTasksRequest = { tasks: list, cluster: element };
+        const describeList = (await ecs.describeTasks(describeInput).promise()).tasks || [];
+        if (describeList.length === 0) {
+          continue;
+        }
+        CloudRunnerLogger.log(`DescribeTasksRequest ${describeList.length}`);
+        for (const taskElement of describeList) {
+          if (taskElement === undefined) {
+            continue;
+          }
+          taskElement.overrides = {};
+          taskElement.attachments = [];
+          CloudRunnerLogger.log(JSON.stringify(taskElement, undefined, 4));
+          if (taskElement.createdAt === undefined) {
+            CloudRunnerLogger.log(`Skipping ${taskElement.taskDefinitionArn} no createdAt date`);
+            continue;
+          }
+          if (perResultCallback) await perResultCallback(taskElement, element);
+        }
+      }
     }
   }
+  @CliFunction(`aws-garbage-collect`, `garbage collect aws`)
+  static async garbageCollectAws() {
+    await AwsCliCommands.cleanup(false);
+  }
+  @CliFunction(`aws-garbage-collect-all`, `garbage collect aws`)
+  static async garbageCollectAwsAll() {
+    await AwsCliCommands.cleanup(true);
+  }
+  @CliFunction(`aws-garbage-collect-all-1d-older`, `garbage collect aws`)
+  static async garbageCollectAwsAllOlderThanOneDay() {
+    await AwsCliCommands.cleanup(true);
+  }
+  private static async cleanup(deleteResources = false) {
+    process.env.AWS_REGION = Input.region;
+    const CF = new AWS.CloudFormation();
+    const ecs = new AWS.ECS();
+    await AwsCliCommands.awsListStacks(async (element) => {
+      if (deleteResources) {
+        if (element.StackName === 'game-ci' || element.TemplateDescription === 'Game-CI base stack') {
+          CloudRunnerLogger.log(`Skipping ${element.StackName} ignore list`);
+          return;
+        }
+        const deleteStackInput: AWS.CloudFormation.DeleteStackInput = { StackName: element.StackName };
+        await CF.deleteStack(deleteStackInput).promise();
+      }
+    });
+    await AwsCliCommands.awsListTasks(async (taskElement, element) => {
+      if (deleteResources) {
+        await ecs.stopTask({ task: taskElement.taskArn || '', cluster: element }).promise();
+      }
+    });
+  }
 }
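The cleanup path above protects shared infrastructure with an ignore list: the base stack is used by every run, so garbage collection must never delete it. A sketch of that guard (the function name is illustrative; the compared values are from the diff):

```typescript
// The base stack is matched either by its fixed name or by the template
// description set in BaseStackFormation ('Game-CI base stack').
function shouldSkipStack(stackName: string, templateDescription?: string): boolean {
  return stackName === 'game-ci' || templateDescription === 'Game-CI base stack';
}
```

Matching on the template description as well as the name keeps the guard working when the base stack is deployed under a custom stack name.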

View File

@@ -66,21 +66,24 @@ class AWSBuildEnvironment implements ProviderInterface {
     );
     let postRunTaskTimeMs;
-    let output = '';
     try {
       const postSetupStacksTimeMs = Date.now();
       CloudRunnerLogger.log(`Setup job time: ${Math.floor((postSetupStacksTimeMs - startTimeMs) / 1000)}s`);
-      output = await AWSTaskRunner.runTask(taskDef, ECS, CF, environment, buildGuid, commands);
+      const { output, shouldCleanup } = await AWSTaskRunner.runTask(taskDef, ECS, CF, environment, buildGuid, commands);
       postRunTaskTimeMs = Date.now();
       CloudRunnerLogger.log(`Run job time: ${Math.floor((postRunTaskTimeMs - postSetupStacksTimeMs) / 1000)}s`);
-    } finally {
-      await this.cleanupResources(CF, taskDef);
-    }
+      if (shouldCleanup) {
+        await this.cleanupResources(CF, taskDef);
+      }
       const postCleanupTimeMs = Date.now();
       if (postRunTaskTimeMs !== undefined)
         CloudRunnerLogger.log(`Cleanup job time: ${Math.floor((postCleanupTimeMs - postRunTaskTimeMs) / 1000)}s`);
       return output;
+    } catch (error) {
+      await this.cleanupResources(CF, taskDef);
+      throw error;
+    }
   }
   async cleanupResources(CF: SDK.CloudFormation, taskDef: CloudRunnerAWSTaskDef) {
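The control flow above replaces an unconditional `finally` with a conditional teardown: clean up when the run fails or asks for cleanup, but leave the stack alive for resident tasks. A synchronous sketch of the pattern (the real code is async; names are illustrative):

```typescript
// Run a job and tear down its resources conditionally: a failed run always
// cleans up before rethrowing; a successful run cleans up only when the
// task reports shouldCleanup === true.
function runWithCleanup(run: () => { output: string; shouldCleanup: boolean }, cleanup: () => void): string {
  try {
    const { output, shouldCleanup } = run();
    if (shouldCleanup) {
      cleanup();
    }
    return output;
  } catch (error) {
    cleanup();
    throw error;
  }
}
```

The earlier `finally` form could not express "skip cleanup on success"; splitting the success and failure paths is what makes resident tasks possible.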

View File

@@ -108,8 +108,8 @@ class KubernetesJobSpecFactory {
               workingDir: `${workingDirectory}`,
               resources: {
                 requests: {
-                  memory: buildParameters.cloudRunnerMemory,
-                  cpu: buildParameters.cloudRunnerCpu,
+                  memory: buildParameters.cloudRunnerMemory || '750M',
+                  cpu: buildParameters.cloudRunnerCpu || '1',
                 },
               },
               env: [

View File

@@ -70,17 +70,17 @@ export class Caching {
         return typeof arguments_[number] != 'undefined' ? arguments_[number] : match;
       });
     };
-    await CloudRunnerSystem.Run(`zip -q -r ${cacheArtifactName}.zip ${path.basename(sourceFolder)}`);
-    assert(await fileExists(`${cacheArtifactName}.zip`), 'cache zip exists');
+    await CloudRunnerSystem.Run(`tar -cf ${cacheArtifactName}.tar ${path.basename(sourceFolder)}`);
+    assert(await fileExists(`${cacheArtifactName}.tar`), 'cache archive exists');
     assert(await fileExists(path.basename(sourceFolder)), 'source folder exists');
     if (CloudRunner.buildParameters.cachePushOverrideCommand) {
       await CloudRunnerSystem.Run(formatFunction(CloudRunner.buildParameters.cachePushOverrideCommand));
     }
-    await CloudRunnerSystem.Run(`mv ${cacheArtifactName}.zip ${cacheFolder}`);
-    RemoteClientLogger.log(`moved ${cacheArtifactName}.zip to ${cacheFolder}`);
+    await CloudRunnerSystem.Run(`mv ${cacheArtifactName}.tar ${cacheFolder}`);
+    RemoteClientLogger.log(`moved cache entry ${cacheArtifactName} to ${cacheFolder}`);
     assert(
-      await fileExists(`${path.join(cacheFolder, cacheArtifactName)}.zip`),
-      'cache zip exists inside cache folder',
+      await fileExists(`${path.join(cacheFolder, cacheArtifactName)}.tar`),
+      'cache archive exists inside cache folder',
     );
   } catch (error) {
     process.chdir(`${startPath}`);
@@ -101,14 +101,14 @@ export class Caching {
       await fs.promises.mkdir(destinationFolder);
     }
-    const latestInBranch = await (await CloudRunnerSystem.Run(`ls -t "${cacheFolder}" | grep .zip$ | head -1`))
+    const latestInBranch = await (await CloudRunnerSystem.Run(`ls -t "${cacheFolder}" | grep .tar$ | head -1`))
       .replace(/\n/g, ``)
-      .replace('.zip', '');
+      .replace('.tar', '');
     process.chdir(cacheFolder);
     const cacheSelection =
-      cacheArtifactName !== `` && (await fileExists(`${cacheArtifactName}.zip`)) ? cacheArtifactName : latestInBranch;
+      cacheArtifactName !== `` && (await fileExists(`${cacheArtifactName}.tar`)) ? cacheArtifactName : latestInBranch;
     await CloudRunnerLogger.log(`cache key ${cacheArtifactName} selection ${cacheSelection}`);
     // eslint-disable-next-line func-style
@@ -127,12 +127,12 @@ export class Caching {
       await CloudRunnerSystem.Run(formatFunction(CloudRunner.buildParameters.cachePullOverrideCommand));
     }
-    if (await fileExists(`${cacheSelection}.zip`)) {
+    if (await fileExists(`${cacheSelection}.tar`)) {
       const resultsFolder = `results${CloudRunner.buildParameters.buildGuid}`;
       await CloudRunnerSystem.Run(`mkdir -p ${resultsFolder}`);
-      RemoteClientLogger.log(`cache item exists ${cacheFolder}/${cacheSelection}.zip`);
+      RemoteClientLogger.log(`cache item exists ${cacheFolder}/${cacheSelection}.tar`);
       const fullResultsFolder = path.join(cacheFolder, resultsFolder);
-      await CloudRunnerSystem.Run(`unzip -q ${cacheSelection}.zip -d ${path.basename(resultsFolder)}`);
+      await CloudRunnerSystem.Run(`tar -xf ${cacheSelection}.tar -C ${fullResultsFolder}`);
       RemoteClientLogger.log(`cache item extracted to ${fullResultsFolder}`);
       assert(await fileExists(fullResultsFolder), `cache extraction results folder exists`);
       const destinationParentFolder = path.resolve(destinationFolder, '..');
@@ -143,7 +143,6 @@ export class Caching {
       await CloudRunnerSystem.Run(
        `mv "${path.join(fullResultsFolder, path.basename(destinationFolder))}" "${destinationParentFolder}"`,
      );
-      await CloudRunnerSystem.Run(`du -sh ${path.join(destinationParentFolder, path.basename(destinationFolder))}`);
       const contents = await fs.promises.readdir(
         path.join(destinationParentFolder, path.basename(destinationFolder)),
       );
@@ -153,7 +152,7 @@ export class Caching {
     } else {
       RemoteClientLogger.logWarning(`cache item ${cacheArtifactName} doesn't exist ${destinationFolder}`);
       if (cacheSelection !== ``) {
-        RemoteClientLogger.logWarning(`cache item ${cacheArtifactName}.zip doesn't exist ${destinationFolder}`);
+        RemoteClientLogger.logWarning(`cache item ${cacheArtifactName}.tar doesn't exist ${destinationFolder}`);
         throw new Error(`Failed to get cache item, but cache hit was found: ${cacheSelection}`);
       }
     }
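Cache pull above prefers the exact cache key and falls back to the newest `.tar` in the folder (the `ls -t … | head -1` pipeline). A sketch of that selection over an in-memory listing (the function name is illustrative; entries are assumed sorted newest-first, matching `ls -t`):

```typescript
// Prefer an exact cache-key hit; otherwise fall back to the newest .tar
// entry, mirroring the `ls -t | grep .tar$ | head -1` fallback above.
function selectCacheEntry(cacheArtifactName: string, entriesNewestFirst: string[]): string {
  const latestInBranch = (entriesNewestFirst.find((x) => x.endsWith('.tar')) || '').replace('.tar', '');
  return cacheArtifactName !== '' && entriesNewestFirst.includes(`${cacheArtifactName}.tar`)
    ? cacheArtifactName
    : latestInBranch;
}
```

The fallback is what makes branch-level caching useful: a new cache key (for example after an LFS GUID sum change) still reuses the most recent archive instead of starting cold.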

View File

@@ -45,9 +45,11 @@ export class RemoteClient {
   }
   private static async sizeOfFolder(message: string, folder: string) {
+    if (CloudRunner.buildParameters.cloudRunnerIntegrationTests) {
       CloudRunnerLogger.log(`Size of ${message}`);
       await CloudRunnerSystem.Run(`du -sh ${folder}`);
     }
+  }
   private static async cloneRepoWithoutLFSFiles() {
     try {

View File

@@ -71,7 +71,7 @@ export class BuildAutomationWorkflow implements WorkflowInterface {
     const builderPath = path.join(CloudRunnerFolders.builderPathAbsolute, 'dist', `index.js`).replace(/\\/g, `/`);
     return `apt-get update > /dev/null
-      apt-get install -y zip tree npm git-lfs jq unzip git > /dev/null
+      apt-get install -y tar tree npm git-lfs jq git > /dev/null
       npm install -g n > /dev/null
       n stable > /dev/null
       ${setupHooks.filter((x) => x.hook.includes(`before`)).map((x) => x.commands) || ' '}

View File

@@ -1,7 +1,12 @@
 import { CloudRunnerSystem } from '../cloud-runner/services/cloud-runner-system';
+import Input from '../input';
 export class GenericInputReader {
   public static async Run(command) {
+    if (Input.cloudRunnerCluster === 'local') {
+      return '';
+    }
     return await CloudRunnerSystem.Run(command, false, true);
   }
 }

View File

@@ -2,9 +2,13 @@ import { assert } from 'console';
 import fs from 'fs';
 import { CloudRunnerSystem } from '../cloud-runner/services/cloud-runner-system';
 import CloudRunnerLogger from '../cloud-runner/services/cloud-runner-logger';
+import Input from '../input';
 export class GitRepoReader {
   public static async GetRemote() {
+    if (Input.cloudRunnerCluster === 'local') {
+      return '';
+    }
     assert(fs.existsSync(`.git`));
     const value = (await CloudRunnerSystem.Run(`git remote -v`, false, true)).replace(/ /g, ``);
     CloudRunnerLogger.log(`value ${value}`);
@@ -14,6 +18,9 @@ export class GitRepoReader {
   }
   public static async GetBranch() {
+    if (Input.cloudRunnerCluster === 'local') {
+      return '';
+    }
     assert(fs.existsSync(`.git`));
     return (await CloudRunnerSystem.Run(`git branch --show-current`, false, true))

View File

@@ -1,8 +1,12 @@
 import { CloudRunnerSystem } from '../cloud-runner/services/cloud-runner-system';
 import * as core from '@actions/core';
+import Input from '../input';
 export class GithubCliReader {
   static async GetGitHubAuthToken() {
+    if (Input.cloudRunnerCluster === 'local') {
+      return '';
+    }
     try {
       const authStatus = await CloudRunnerSystem.Run(`gh auth status`, true, true);
       if (authStatus.includes('You are not logged') || authStatus === '') {

View File

@@ -1,8 +1,12 @@
 import path from 'path';
 import fs from 'fs';
 import YAML from 'yaml';
+import Input from '../input';
 export function ReadLicense() {
+  if (Input.cloudRunnerCluster === 'local') {
+    return '';
+  }
   const pipelineFile = path.join(__dirname, `.github`, `workflows`, `cloud-runner-k8s-pipeline.yml`);
   return fs.existsSync(pipelineFile) ? YAML.parse(fs.readFileSync(pipelineFile, 'utf8')).env.UNITY_LICENSE : '';
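The four readers above share one guard: on a `local` cluster, remote input probing is skipped entirely and an empty string is returned, which is what makes the "no async input" refactor safe for CLI runs. The shape of the guard, with illustrative names:

```typescript
// Skip expensive or environment-dependent probes (git, gh, license files)
// when running against a local cluster; otherwise delegate to the probe.
function readRemoteInput(cluster: string, probe: () => string): string {
  if (cluster === 'local') {
    return '';
  }
  return probe();
}
```

Centralizing the check like this would avoid repeating it in every reader; the diff instead inlines it per method, which keeps each reader self-contained.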

View File

@@ -234,11 +234,11 @@ class Input {
   }
   static get cloudRunnerCpu() {
-    return Input.getInput('cloudRunnerCpu') || '1.0';
+    return Input.getInput('cloudRunnerCpu');
   }
   static get cloudRunnerMemory() {
-    return Input.getInput('cloudRunnerMemory') || '750M';
+    return Input.getInput('cloudRunnerMemory');
   }
   static get kubeConfig() {
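With the `'1.0'`/`'750M'` fallbacks removed from `Input`, each provider now applies its own default when the input is empty: the AWS job stack uses `1024`/`3072` (ECS CPU units and MiB) and the Kubernetes spec uses `1`/`750M`. A sketch of the split (function names are illustrative; the values are from this diff):

```typescript
// Provider-specific resource defaults, applied only when the user left the
// cloudRunnerCpu / cloudRunnerMemory inputs empty.
function awsResources(cpu?: string, memory?: string) {
  return { cpu: cpu || '1024', memory: memory || '3072' };
}

function k8sResources(cpu?: string, memory?: string) {
  return { cpu: cpu || '1', memory: memory || '750M' };
}
```

Moving the defaults out of `Input` matters because the two backends use incompatible units: an ECS CPU value like `1024` is not a valid Kubernetes CPU request, so a single shared default could not satisfy both.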