aws is amazing, but it's hard to see the forest for the trees.
aws should be easier to use, harder to screw up, and more fun.

it should be easy for a lambda to react to:
- an apigateway http or websocket request
- an s3 object change
- a dynamodb stream
- an sqs queue
- a schedule
- an ecr image push
- an incoming ses email

it should be easy to create:
- lambdas
- s3 buckets
- dynamodb tables
- sqs queues
- vpcs and security groups
- keypairs
- instance profiles
declare and deploy groups of related aws infrastructure as infrastructure sets that contain:
- lambdas, with their policies and allows
- s3 buckets
- dynamodb tables
- sqs queues
- vpcs and security groups
- keypairs and instance profiles

and that react to lambda triggers:
- api and websocket
- s3
- dynamodb
- sqs
- schedule
- ecr
- ses
a simpler way to declare aws infrastructure that is easy to use and extend.
there are two ways to use it:
- yaml and the cli
- go structs and the go api
the primary entrypoints are:

infra-ensure: deploy an infrastructure set.

  libaws infra-ensure ./infra.yaml --preview
  libaws infra-ensure ./infra.yaml

infra-ls: view infrastructure sets.

  libaws infra-ls

infra-ensure --quick: quickly update lambda code.

  libaws infra-ensure ./infra.yaml --quick LAMBDA_NAME

infra-rm: remove an infrastructure set.

  libaws infra-rm ./infra.yaml --preview
  libaws infra-rm ./infra.yaml
infra-ensure is a positive assertion. it asserts that some named infrastructure exists, and is configured correctly, creating or updating it if needed.
many other entrypoints exist, and can be explored by type. they fall into two categories:
mutate aws state:
>> libaws -h | grep ensure | wc -l
19
>> libaws -h | grep new | wc -l
1
>> libaws -h | grep rm | wc -l
26
view aws state:
>> libaws -h | grep ls | wc -l
33
>> libaws -h | grep describe | wc -l
6
>> libaws -h | grep get | wc -l
16
>> libaws -h | grep scan | wc -l
1
compared to the full aws api, systems declared as infrastructure sets:
- are easier to use.
- are harder to screw up.
- are almost always enough, and easy to extend.
- are more fun.
if you want to use the full aws api, there are many great tools, like the official aws cli and sdks.
install the cli:

  go install github.com/nathants/libaws@latest
  export PATH=$PATH:$(go env GOPATH)/bin

add the library to a go project:

  go get github.com/nathants/libaws@latest
>> cd examples/simple/go/s3 && tree
.
├── infra.yaml
└── main.go
name: test-infraset-${uid}

s3:
  test-bucket-${uid}:
    attr:
      - acl=private

lambda:
  test-lambda-${uid}:
    entrypoint: main.go
    attr:
      - concurrency=0
      - memory=128
      - timeout=60
    policy:
      - AWSLambdaBasicExecutionRole
    trigger:
      - type: s3
        attr:
          - test-bucket-${uid}
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handleRequest(_ context.Context, e events.S3Event) (events.APIGatewayProxyResponse, error) {
	for _, record := range e.Records {
		fmt.Println(record.S3.Object.Key)
	}
	return events.APIGatewayProxyResponse{StatusCode: 200}, nil
}

func main() {
	lambda.Start(handleRequest)
}
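to iterate quickly on just the lambda code of this example, use the --quick flag shown earlier:

  libaws infra-ensure ./infra.yaml --quick test-lambda-${uid}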
[screenshot: depth based colors by yaml]
>> libaws -h | grep ensure | head
codecommit-ensure - ensure a codecommit repository
dynamodb-ensure - ensure a dynamodb table
ec2-ensure-keypair - ensure a keypair
ec2-ensure-sg - ensure a sg
ecr-ensure - ensure ecr image
iam-ensure-ec2-spot-roles - ensure iam ec2 spot roles that are needed to use ec2 spot
iam-ensure-instance-profile - ensure an iam instance-profile
iam-ensure-role - ensure an iam role
iam-ensure-user-api - ensure an iam user with api key
iam-ensure-user-login - ensure an iam user with login
>> libaws s3-ensure -h
ensure a s3 bucket
example:
- libaws s3-ensure test-bucket acl=public versioning=true
optional attrs:
- acl=VALUE (values = public | private, default = private)
- versioning=VALUE (values = true | false, default = false)
- metrics=VALUE (values = true | false, default = true)
- cors=VALUE (values = true | false, default = false)
- ttldays=VALUE (values = 0 | n, default = 0)
setting 'cors=true' uses '*' for allowed origins. to specify one or more explicit origins, do this instead:
- corsorigin=http://localhost:8080
- corsorigin=https://example.com
Usage: s3-ensure [--preview] NAME [ATTR [ATTR ...]]
Positional arguments:
NAME
ATTR
Options:
--preview, -p
--help, -h display this help and exit
package main

import (
	"github.com/nathants/libaws/lib"
)

func main() {
	lib. (TAB =>)
	|--------------------------------------------------------------------------------|
	|f AcmClient func() *acm.ACM (Function)                                          |
	|f AcmClientExplicit func(accessKeyID string, accessKeySecret string, region stri|
	|f AcmListCertificates func(ctx context.Context) ([]*acm.CertificateSummary, erro|
	|f Api func(ctx context.Context, name string) (*apigatewayv2.Api, error) (Functio|
	|f ApiClient func() *apigatewayv2.ApiGatewayV2 (Function)                        |
	|f ApiClientExplicit func(accessKeyID string, accessKeySecret string, region stri|
	|f ApiList func(ctx context.Context) ([]*apigatewayv2.Api, error) (Function)     |
	|f ApiListDomains func(ctx context.Context) ([]*apigatewayv2.DomainName, error) (|
	|f ApiUrl func(ctx context.Context, name string) (string, error) (Function)      |
	|f ApiUrlDomain func(ctx context.Context, name string) (string, error) (Function)|
	|...                                                                             |
	|--------------------------------------------------------------------------------|
}
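for example, a minimal sketch of calling the library directly, assuming aws credentials are configured in the environment. lib.ApiList's signature is taken from the completion above:

package main

import (
	"context"
	"fmt"

	"github.com/nathants/libaws/lib"
)

func main() {
	// list all apigateway v2 apis in the account/region
	apis, err := lib.ApiList(context.Background())
	if err != nil {
		panic(err)
	}
	for _, api := range apis {
		fmt.Println(*api.Name)
	}
}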
an infrastructure set is defined by yaml or go struct and contains:
- lambdas, with their triggers, policies, and allows
- s3 buckets
- dynamodb tables
- sqs queues
- vpcs and security groups
- keypairs
- instance profiles
use infra-ensure to deploy an infrastructure set.

  libaws infra-ensure ./infra.yaml --preview
  libaws infra-ensure ./infra.yaml

use infra-ls to view infrastructure sets.

  libaws infra-ls

use infra-ensure --quick LAMBDA_NAME to quickly update lambda code.

  libaws infra-ensure ./infra.yaml --quick LAMBDA_NAME

use infra-rm to remove an infrastructure set.

  libaws infra-rm ./infra.yaml --preview
  libaws infra-rm ./infra.yaml
there is no implicit coordination.

there are only two state locations:
- aws itself.
- your infra.yaml.

aws infrastructure is uniquely identified by name.

mutative operations manipulate aws state. they support --preview, and no output means no changes.

ensure operations are mutative operations that create or update infrastructure.

rm operations are mutative operations that delete infrastructure.

ls, get, scan, and describe operations are non-mutative.

multiple infrastructure sets can be deployed into the same account/region.

no attempt is made to avoid vendor lock-in.
ensure operations are positive assertions. they assert that some named infrastructure exists, and is configured correctly, creating or updating it if needed.
positive assertions CANNOT remove top level infrastructure, but CAN remove configuration from them.
- removing a trigger, policy, or allow WILL remove that from the lambda.
- removing a policy or allow WILL remove that from the instance-profile.
- removing a security-group WILL remove that from the vpc.
- removing a rule WILL remove that from the security-group.
- removing an attr WILL remove that from an sqs, s3, dynamodb, or lambda.
- removing a keypair, vpc, instance-profile, sqs, s3, dynamodb, or lambda WON'T remove that from the account/region.
the operator decides IF and WHEN top level infrastructure should be deleted, then uses an rm operation to do so. as a convenience, infra-rm will remove ALL infrastructure CURRENTLY declared in an infra.yaml.
when using ensure operations, no output means no changes.
for large infrastructure sets, this can mean a minute or two without output if no changes are needed.
to see a lot of output instead of none, set this environment variable:
export DEBUG=yes
infra-ls is designed to list aws accounts managed with infra-ensure. it will not work well in other scenarios.
use an infra.yaml file to declare an infrastructure set. the schema is as follows:
name: VALUE

lambda:
  VALUE:
    entrypoint: VALUE
    policy: [VALUE ...]
    allow: [VALUE ...]
    attr: [VALUE ...]
    require: [VALUE ...]
    env: [VALUE ...]
    include: [VALUE ...]
    trigger:
      - type: VALUE
        attr: [VALUE ...]

s3:
  VALUE:
    attr: [VALUE ...]

dynamodb:
  VALUE:
    key: [VALUE ...]
    attr: [VALUE ...]

sqs:
  VALUE:
    attr: [VALUE ...]

vpc:
  VALUE:
    security-group:
      VALUE:
        rule: [VALUE ...]

keypair:
  VALUE:
    pubkey-content: VALUE

instance-profile:
  VALUE:
    allow: [VALUE ...]
    policy: [VALUE ...]
anywhere in infra.yaml you can substitute environment variables from the caller's environment:

s3:
  test-bucket-${uid}:
    attr:
      - versioning=${versioning}
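for example, a deploy might pass values through the environment. a sketch using the variables from the snippet above:

  versioning=true uid=test libaws infra-ensure ./infra.yaml --preview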
the following variables are defined during deployment, and are useful in allow declarations:

- ${API_ID}: the id of the apigateway v2 api created by an api trigger.
- ${WEBSOCKET_ID}: the id of the apigateway v2 websocket created by a websocket trigger.
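for example, a hypothetical websocket lambda might use ${WEBSOCKET_ID} to scope an allow to its own api. the allow and trigger formats are described below:

lambda:
  test-lambda:
    trigger:
      - type: websocket
    allow:
      - execute-api:ManageConnections arn:aws:execute-api:*:*:${WEBSOCKET_ID}/*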
defines the name of the infrastructure set.
schema:
name: VALUE
example:
name: test-infraset
defines an s3 bucket:
the following attributes can be defined:
- acl=VALUE, values: public | private, default: private
- versioning=VALUE, values: true | false, default: false
- metrics=VALUE, values: true | false, default: true
- cors=VALUE, values: true | false, default: false
- ttldays=VALUE, values: 0 | n, default: 0
- allow_put=VALUE, values: $principal.amazonaws.com

setting cors=true uses * for allowed origins. to specify one or more explicit origins, do this instead:
- corsorigin=http://localhost:8080
- corsorigin=https://example.com
schema:

s3:
  VALUE:
    attr:
      - VALUE

example:

s3:
  test-bucket:
    attr:
      - versioning=true
      - acl=public
defines a dynamodb table:
specify key as: NAME:ATTR_TYPE:KEY_TYPE
the following attributes can be defined:
- read=VALUE, provisioned read capacity, default: 0
- write=VALUE, provisioned write capacity, default: 0

on global indices the following attributes can be defined:
- projection=VALUE, projection type, default: ALL
- read=VALUE, provisioned read capacity, default: 0
- write=VALUE, provisioned write capacity, default: 0

on local indices the following attributes can be defined:
- projection=VALUE, projection type, default: ALL
schema:

dynamodb:
  VALUE:
    key:
      - NAME:ATTR_TYPE:KEY_TYPE
    attr:
      - VALUE
    global-index:
      VALUE:
        key:
          - NAME:ATTR_TYPE:KEY_TYPE
        non-key:
          - NAME
        attr:
          - VALUE
    local-index:
      VALUE:
        key:
          - NAME:ATTR_TYPE:KEY_TYPE
        non-key:
          - NAME
        attr:
          - VALUE
example:

dynamodb:
  stream-table:
    key:
      - userid:s:hash
      - timestamp:n:range
    attr:
      - stream=keys_only
  auth-table:
    key:
      - id:s:hash
    attr:
      - write=50
      - read=150
example global secondary index:

dynamodb:
  test-table:
    key:
      - id:s:hash
    global-index:
      test-index:
        key:
          - hometown:s:hash
example local secondary index:

dynamodb:
  test-table:
    key:
      - id:s:hash
    local-index:
      test-index:
        key:
          - hometown:s:hash
defines an sqs queue:

the following attributes can be defined:
- delay=VALUE, delay seconds, default: 0
- size=VALUE, maximum message size bytes, default: 262144
- retention=VALUE, message retention period seconds, default: 345600
- wait=VALUE, receive wait time seconds, default: 0
- timeout=VALUE, visibility timeout seconds, default: 30
schema:

sqs:
  VALUE:
    attr:
      - VALUE

example:

sqs:
  test-queue:
    attr:
      - delay=20
      - timeout=300
defines an ec2 keypair.

schema:

keypair:
  VALUE:
    pubkey-content: VALUE

example:

keypair:
  test-keypair:
    pubkey-content: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICVp11Z99AySWfbLrMBewZluh7cwLlkjifGH5u22RXor
defines a default-like vpc with an internet gateway and public access.

schema:

vpc:
  VALUE: {}

example:

vpc:
  test-vpc: {}
defines a security group on a vpc.

schema:

vpc:
  VALUE:
    security-group:
      VALUE:
        rule:
          - PROTO:PORT:SOURCE

example:

vpc:
  test-vpc:
    security-group:
      test-sg:
        rule:
          - tcp:22:0.0.0.0/0
defines an ec2 instance profile.

schema:

instance-profile:
  VALUE:
    allow:
      - SERVICE:ACTION ARN
    policy:
      - VALUE

example:

instance-profile:
  test-profile:
    allow:
      - s3:* *
    policy:
      - AWSLambdaBasicExecutionRole
defines a lambda.

schema:

lambda:
  VALUE: {}

example:

lambda:
  test-lambda: {}
defines the code of the lambda. it is one of:
- a python file.
- a go file.
- an ecr container uri.

schema:

lambda:
  VALUE:
    entrypoint: VALUE

example:

lambda:
  test-lambda:
    entrypoint: main.go
defines lambda attributes. the following can be defined:
- concurrency, defines the reserved concurrent executions, default: 0
- memory, defines lambda ram in megabytes, default: 128
- timeout, defines the lambda timeout in seconds, default: 300
- logs-ttl-days, defines the ttl days for cloudwatch logs, default: 7

schema:

lambda:
  VALUE:
    attr:
      - KEY=VALUE

example:

lambda:
  test-lambda:
    attr:
      - concurrency=100
      - memory=256
      - timeout=60
      - logs-ttl-days=1
defines policies on the lambda's iam role.

schema:

lambda:
  VALUE:
    policy:
      - VALUE

example:

lambda:
  test-lambda:
    policy:
      - AWSLambdaBasicExecutionRole
defines allows on the lambda's iam role.

schema:

lambda:
  VALUE:
    allow:
      - SERVICE:ACTION ARN

example:

lambda:
  test-lambda:
    allow:
      - s3:* *
      - dynamodb:* arn:aws:dynamodb:*:*:table/test-table
defines environment variables on the lambda:

schema:

lambda:
  VALUE:
    env:
      - KEY=VALUE

example:

lambda:
  test-lambda:
    env:
      - kind=production
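inside the lambda these arrive as ordinary process environment variables. a minimal handler sketch reading the variable declared above:

package main

import (
	"context"
	"fmt"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
)

func handleRequest(_ context.Context) error {
	// read the env var declared in infra.yaml
	fmt.Println(os.Getenv("kind"))
	return nil
}

func main() {
	lambda.Start(handleRequest)
}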
defines extra content to include in the lambda zip:

this is ignored when entrypoint is an ecr container uri.

schema:

lambda:
  VALUE:
    include:
      - VALUE

example:

lambda:
  test-lambda:
    include:
      - ./cacerts.crt
      - ../frontend/public/*
defines dependencies to install with pip in the virtualenv zip.

this is ignored unless the entrypoint is a python file.

schema:

lambda:
  VALUE:
    require:
      - VALUE

example:

lambda:
  test-lambda:
    require:
      - fastapi==0.76.0
defines triggers for the lambda:

schema:

lambda:
  VALUE:
    trigger:
      - type: VALUE
        attr:
          - VALUE

example:

lambda:
  test-lambda:
    trigger:
      - type: dynamodb
        attr:
          - test-table
defines an ses email receiving trigger.

route53 and ses must already be configured to use this trigger.

dns and bucket attrs are required, prefix is optional.

the s3 bucket must allow put from ses.

schema:

lambda:
  VALUE:
    trigger:
      - type: ses
        attr:
          - VALUE

example:

s3:
  my-bucket:
    attr:
      - allow_put=ses.amazonaws.com

lambda:
  test-lambda:
    trigger:
      - type: ses
        attr:
          - dns=my-email-domain.com
          - bucket=my-bucket
          - prefix=emails/
defines an apigateway v2 http api:

- add a custom domain with attr: domain=api.example.com
- add a custom domain and update route53 with attr: dns=api.example.com

schema:

lambda:
  VALUE:
    trigger:
      - type: api
        attr:
          - VALUE

example:

lambda:
  test-lambda:
    trigger:
      - type: api
        attr:
          - dns=api.example.com
defines an apigateway v2 websocket api:

- add a custom domain with attr: domain=ws.example.com
- add a custom domain and update route53 with attr: dns=ws.example.com

this domain, or its parent domain, must already exist as a route53 hosted zone (see route53-ls).

this domain, or its parent domain, must already have an acm certificate with a subdomain wildcard.

schema:

lambda:
  VALUE:
    trigger:
      - type: websocket
        attr:
          - VALUE

example:

lambda:
  test-lambda:
    trigger:
      - type: websocket
        attr:
          - dns=ws.example.com
defines an s3 trigger:

the only attribute must be the bucket name.

object creation and deletion invoke the trigger.

schema:

lambda:
  VALUE:
    trigger:
      - type: s3
        attr:
          - VALUE

example:

lambda:
  test-lambda:
    trigger:
      - type: s3
        attr:
          - test-bucket
defines a dynamodb trigger:

the first attribute must be the table name.

the following trigger attributes can be defined:
- batch=VALUE, maximum batch size, default: 100
- parallel=VALUE, parallelization factor, default: 1
- retry=VALUE, maximum retry attempts, default: -1
- window=VALUE, maximum batching window in seconds, default: 0
- start=VALUE, starting position

schema:

lambda:
  VALUE:
    trigger:
      - type: dynamodb
        attr:
          - VALUE

example:

lambda:
  test-lambda:
    trigger:
      - type: dynamodb
        attr:
          - test-table
          - start=trim_horizon
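a minimal handler sketch for this trigger, receiving the stream records via the standard aws-lambda-go event types, as in the s3 example earlier:

package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

func handleRequest(_ context.Context, e events.DynamoDBEvent) error {
	for _, record := range e.Records {
		// print the event name and the keys of the modified item
		fmt.Println(record.EventName, record.Change.Keys)
	}
	return nil
}

func main() {
	lambda.Start(handleRequest)
}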
defines an sqs trigger:

the first attribute must be the queue name.

the following trigger attributes can be defined:
- batch=VALUE, maximum batch size, default: 10
- window=VALUE, maximum batching window in seconds, default: 0

schema:

lambda:
  VALUE:
    trigger:
      - type: sqs
        attr:
          - VALUE

example:

lambda:
  test-lambda:
    trigger:
      - type: sqs
        attr:
          - test-queue
defines a schedule trigger:

the only attribute must be the schedule expression.

schema:

lambda:
  VALUE:
    trigger:
      - type: schedule
        attr:
          - VALUE

example:

lambda:
  test-lambda:
    trigger:
      - type: schedule
        attr:
          - rate(24 hours)
defines an ecr trigger:

successful image actions to any ecr repository will invoke the trigger.

schema:

lambda:
  VALUE:
    trigger:
      - type: ecr

example:

lambda:
  test-lambda:
    trigger:
      - type: ecr
to enable shell completions:

  source completions.d/libaws.sh
drop down to the aws go sdk and implement what you need.
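for instance, a minimal sketch of going straight to the sdk, here listing s3 buckets with aws-sdk-go (not a libaws api):

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// create a session from the usual env/config credential chain
	sess := session.Must(session.NewSession())
	// call the raw sdk api
	out, err := s3.New(sess).ListBuckets(&s3.ListBucketsInput{})
	if err != nil {
		panic(err)
	}
	for _, bucket := range out.Buckets {
		fmt.Println(*bucket.Name)
	}
}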
extend an existing mutative operation or add a new one.
you will find examples in cmd/ and lib/ that can provide a good place to start.
you can reuse many of the existing operations in lib/.

alternatively, lift and shift to other infrastructure automation tooling. ls and describe operations will give you all the information you need.
run all integration tests against aws with tox:

  export LIBAWS_TEST_ACCOUNT=$ACCOUNT_NUM
  tox

run one integration test against aws with tox:

  export LIBAWS_TEST_ACCOUNT=$ACCOUNT_NUM
  tox -- bash -c 'cd examples/simple/python/api/ && python test.py'