Initial commit
31
label/infra/Pulumi.coodex.yaml
Normal file
@@ -0,0 +1,31 @@
config:
  lambda_api_gateway:lambda_api_gateway: valvulas_funcao_rekognition_dev
  aws:region: us-east-1
  project: Rekognition Valvula Funcao
  environment: dev
  vpc:
    id: vpc-0c83ac3bfb36f79b4
  api:
    name: AssistentesProdutosServicosAPI
    description: API gateway created by pulumi
    endpoint_type: PRIVATE
  lambda:
    entity_extraction:
      name: assistente-produtos-servicos-dev
      handler: agent.agent_call
      timeout: 900
      runtime: python3.12
  ecr_repo:
    name: assistente-produtos-servicos-backend-dev
    repository_url: 277048801940.dkr.ecr.us-east-1.amazonaws.com/assistente-produtos-servicos-backend-dev
  api_gateway:
    name: token-assistente-produtos-servicos-pulumi
    description: API Key for the dev stage of the API Gateway
    usage_plan_name: APIAIUsagePlan
    stage_name: dev
    method: POST
    api_key_required: true
    request_model: Empty
  deployment:
    stage_name: dev
13
label/infra/code/Dockerfile
Normal file
@@ -0,0 +1,13 @@
FROM public.ecr.aws/lambda/python:3.13

# Copy requirements.txt
COPY requirements.txt ${LAMBDA_TASK_ROOT}

# Install the specified packages
RUN pip install -r requirements.txt

# Copy function code
COPY ./ ${LAMBDA_TASK_ROOT}

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD ["lambda_handler.lambda_handler"]
93
label/infra/code/README.md
Normal file
@@ -0,0 +1,93 @@
# ChatBot

## Getting started

To make it easy for you to get started with GitLab, here's a list of recommended next steps.

Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!

## Add your files

- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/topics/git/add_files/#add-files-to-a-git-repository) or push an existing Git repository with the following command:

```
cd existing_repo
git remote add origin https://gitlab.shared.cloud.dnxbrasil.com.br/dnx-br/clientes/ifsp/chatbot.git
git branch -M main
git push -uf origin main
```

## Integrate with your tools

- [ ] [Set up project integrations](https://gitlab.shared.cloud.dnxbrasil.com.br/dnx-br/clientes/ifsp/chatbot/-/settings/integrations)

## Collaborate with your team

- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Set auto-merge](https://docs.gitlab.com/user/project/merge_requests/auto_merge/)

## Test and Deploy

Use the built-in continuous integration in GitLab.

- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)

***

# Editing this README

When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thanks to [makeareadme.com](https://www.makeareadme.com/) for this template.

## Suggestions for a good README

Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.

## Name
Choose a self-explaining name for your project.

## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.

## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.

## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.

## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.

## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.

## Support
Tell people where they can go for help. It can be any combination of an issue tracker, a chat room, an email address, etc.

## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.

## Contributing
State if you are open to contributions and what your requirements are for accepting them.

For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.

You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.

## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.

## License
For open source projects, say how it is licensed.

## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
1261
label/infra/code/diagram_processor.py
Normal file
File diff suppressed because it is too large
181
label/infra/code/function_a.py
Normal file
@@ -0,0 +1,181 @@
import boto3
import os
import tempfile
import json
from urllib.parse import urlparse
from diagram_processor import DiagramProcessor


def parse_s3_path(s3_path):
    """
    Parse an S3 path into bucket and key.

    Args:
        s3_path: S3 path like 's3://bucket-name/path/to/file.pdf'

    Returns:
        Tuple (bucket, key)
    """
    if not s3_path.startswith('s3://'):
        raise ValueError(f"Invalid S3 path: {s3_path}. Must start with 's3://'")

    parsed = urlparse(s3_path)
    bucket = parsed.netloc
    key = parsed.path.lstrip('/')

    return bucket, key


def download_from_s3(s3_path, local_path):
    """
    Download a file from S3.

    Args:
        s3_path: S3 path (s3://bucket/key)
        local_path: Local file path to save to
    """
    bucket, key = parse_s3_path(s3_path)

    s3_client = boto3.client('s3')
    print(f"Downloading from S3: {s3_path}")
    s3_client.download_file(bucket, key, local_path)
    print(f"Downloaded to: {local_path}")


def execute(s3_path):
    """
    Function A - process a diagram from S3 and return the matches only.

    Args:
        s3_path: S3 path to the diagram (e.g., 's3://my-bucket/diagrams/diagram.pdf')

    Returns:
        Dictionary with matches of labels and blocks
    """
    print("Function A - Diagram Processing")
    print(f"Input S3 path: {s3_path}")

    # Create temporary directory
    with tempfile.TemporaryDirectory() as temp_dir:
        # Download diagram from S3
        bucket, key = parse_s3_path(s3_path)
        input_file = os.path.join(temp_dir, os.path.basename(key))
        download_from_s3(s3_path, input_file)

        # Create output directory for processing
        output_dir = os.path.join(temp_dir, 'output')
        os.makedirs(output_dir, exist_ok=True)

        # Initialize processor
        print("\nInitializing DiagramProcessor...")
        processor = DiagramProcessor(
            region=os.environ.get('AWS_REGION', 'us-east-1'),
            custom_labels_arn=os.environ.get('CUSTOM_LABELS_ARN', 'arn:aws:rekognition:us-east-1:173378533286:project/labels-valvula/version/labels-valvula.2025-11-24T15.44.16/1764009856090')
        )

        # Process diagram
        print("\nProcessing diagram...")
        try:
            results = processor.process_single_diagram(
                diagram_path=input_file,
                output_base_dir=output_dir,
                grid_size=(5, 5),
                overlap_percent=10,
                keep_regex_list=[r'\+', r'\+', r'.*[Xx].*', r'\*', r'\\'],
                min_confidence=80,
                custom_labels_confidence=60,
                iou_threshold=0.3,
                matching_max_distance=200
            )

            # Extract only the matches
            matching_results = results['matching_results']

            # Format matches for clean output
            formatted_matches = []
            for match in matching_results['matches']:
                match_type = match.get('match_type', 'vm_label')

                if match_type == 'two_labels':
                    formatted_match = {
                        'object_name': match['object_name'],
                        'object_confidence': round(match['object_confidence'], 2),
                        'match_type': match_type,
                        'text_top': match['text_top'],
                        'text_top_confidence': round(match['text_confidence_top'], 2),
                        'text_bottom': match['text_bottom'],
                        'text_bottom_confidence': round(match['text_confidence_bottom'], 2),
                        'object_bbox': match['object_bbox'],
                        'text_bbox_top': match['text_bbox_top'],
                        'text_bbox_bottom': match['text_bbox_bottom']
                    }
                else:
                    formatted_match = {
                        'object_name': match['object_name'],
                        'object_confidence': round(match['object_confidence'], 2),
                        'match_type': match_type,
                        'text': match['text'],
                        'text_confidence': round(match['text_confidence'], 2),
                        'distance_pixels': round(match['distance_pixels'], 2),
                        'object_bbox': match['object_bbox'],
                        'text_bbox': match['text_bbox']
                    }

                formatted_matches.append(formatted_match)

            # Format unmatched objects
            unmatched_objects = [
                {
                    'name': obj['Name'],
                    'confidence': round(obj['Confidence'], 2),
                    'bbox': obj['global_bbox']
                }
                for obj in matching_results['unmatched_objects']
            ]

            # Format unmatched texts
            unmatched_texts = [
                {
                    'text': text['text'],
                    'confidence': round(text['confidence'], 2),
                    'bbox': text['global_bbox']
                }
                for text in matching_results['unmatched_texts']
            ]

            # Prepare response
            response = {
                'status': 'success',
                'input_s3_path': s3_path,
                'summary': {
                    'total_matches': len(formatted_matches),
                    'unmatched_objects': len(unmatched_objects),
                    'unmatched_texts': len(unmatched_texts),
                    'matching_rate': f"{matching_results['matching_rate']*100:.1f}%"
                },
                'matches': formatted_matches,
                'unmatched_objects': unmatched_objects,
                'unmatched_texts': unmatched_texts
            }

            print("\n" + "="*80)
            print("PROCESSING COMPLETE")
            print("="*80)
            print(f"Total matches: {len(formatted_matches)}")
            print(f"Matching rate: {matching_results['matching_rate']*100:.1f}%")
            print(f"Unmatched objects: {len(unmatched_objects)}")
            print(f"Unmatched texts: {len(unmatched_texts)}")

            return response

        except Exception as e:
            error_message = f"Error processing diagram: {str(e)}"
            print(error_message)
            import traceback
            traceback.print_exc()

            return {
                'status': 'error',
                'error': error_message,
                'input_s3_path': s3_path
            }
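`function_a.execute` takes a full `s3://bucket/key` URI, and the bucket/key split in `parse_s3_path` can be exercised on its own. A minimal standard-library sketch mirroring that function (the example path is illustrative, not from the repo):

```python
from urllib.parse import urlparse

def parse_s3_path(s3_path):
    """Split 's3://bucket/key' into (bucket, key), as in function_a.py."""
    if not s3_path.startswith('s3://'):
        raise ValueError(f"Invalid S3 path: {s3_path}. Must start with 's3://'")
    parsed = urlparse(s3_path)
    # netloc is the bucket; path keeps a leading '/', which is stripped for the key
    return parsed.netloc, parsed.path.lstrip('/')

bucket, key = parse_s3_path('s3://my-bucket/diagrams/diagram.pdf')
print(bucket)  # my-bucket
print(key)     # diagrams/diagram.pdf
```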
6
label/infra/code/function_b.py
Normal file
@@ -0,0 +1,6 @@
def execute(text):
    """
    Function B - prints the received text parameter
    """
    print(f"Function B received: {text}")
    return f"Function B processed: {text}"
111
label/infra/code/lambda_handler.py
Normal file
@@ -0,0 +1,111 @@
import json
import function_a
import function_b


def lambda_handler(event, context):
    """
    AWS Lambda handler that routes to function_a or function_b.

    Expected event structure:
    {
        "function_name": "function_a" or "function_b",
        "text_parameter": "your string here"
    }
    """
    try:
        # DEBUG: Log the entire event
        print(f"Received event: {json.dumps(event)}")

        # Handle different event sources
        body = None

        # Check if body exists and is a string (API Gateway)
        if 'body' in event:
            if event['body'] is None:
                return {
                    'statusCode': 400,
                    'body': json.dumps({'error': 'Request body is empty'})
                }

            if isinstance(event['body'], str):
                # Try to parse JSON
                try:
                    body = json.loads(event['body'])
                except json.JSONDecodeError as e:
                    return {
                        'statusCode': 400,
                        'body': json.dumps({
                            'error': 'Invalid JSON in request body',
                            'details': str(e),
                            'received': event['body'][:100]  # First 100 chars
                        })
                    }
            else:
                body = event['body']
        else:
            # Direct invocation (no body wrapper)
            body = event

        print(f"Parsed body: {json.dumps(body)}")

        # Get parameters
        function_name = body.get('function_name')
        text_parameter = body.get('text_parameter')

        # Validate inputs
        if not function_name:
            return {
                'statusCode': 400,
                'body': json.dumps({'error': 'function_name is required'})
            }

        if not text_parameter:
            return {
                'statusCode': 400,
                'body': json.dumps({'error': 'text_parameter is required'})
            }

        # Route to the appropriate function
        if function_name == 'function_a':
            result = function_a.execute(text_parameter)
        elif function_name == 'function_b':
            result = function_b.execute(text_parameter)
        else:
            return {
                'statusCode': 400,
                'body': json.dumps({'error': f'Unknown function: {function_name}. Use "function_a" or "function_b"'})
            }

        # Return success response
        return {
            'statusCode': 200,
            'body': json.dumps({
                'message': 'Success',
                'result': result
            })
        }

    except Exception as e:
        print(f"Error: {str(e)}")
        import traceback
        print(traceback.format_exc())
        return {
            'statusCode': 500,
            'body': json.dumps({
                'error': str(e),
                'type': type(e).__name__
            })
        }


# For local testing
if __name__ == "__main__":
    # Test event
    test_event = {
        'function_name': 'function_a',
        'text_parameter': 'Hello from Lambda!'
    }

    result = lambda_handler(test_event, None)
    print(json.dumps(result, indent=2))
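The handler accepts both direct invocations and API Gateway proxy events, where the JSON payload arrives as a string under `body`. That normalization step can be sketched in isolation; `extract_body` is a hypothetical helper name mirroring the handler's logic, not a function in the repo:

```python
import json

def extract_body(event):
    """Hypothetical helper mirroring lambda_handler's body handling:
    unwrap API Gateway proxy events (JSON string under 'body') and
    pass direct invocations through unchanged."""
    if 'body' in event:
        if isinstance(event['body'], str):
            return json.loads(event['body'])
        return event['body']
    return event

direct = {'function_name': 'function_b', 'text_parameter': 'hello'}
proxied = {'body': json.dumps(direct)}  # what API Gateway would deliver
print(extract_body(direct) == extract_body(proxied))  # True
```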
4
label/infra/code/requirements.txt
Normal file
@@ -0,0 +1,4 @@
Pillow
numpy
scipy
pdf2image
8
label/infra/ecr/Pulumi.coodex.yaml
Normal file
@@ -0,0 +1,8 @@
config:
  ecr_dev:entity_extraction_dev: ecr
  ecr_dev:environment: dev
  ecr_dev:ecr:
    entity_extraction:
      image_mutability: MUTABLE
      name: rekognition-valvulas-funcao
  ecr_dev:project: Rekognition Valvula Funcao
3
label/infra/ecr/Pulumi.yaml
Normal file
@@ -0,0 +1,3 @@
name: ecr_dev
runtime: python
description: ECR application infrastructure
93
label/infra/ecr/README.md
Normal file
@@ -0,0 +1,93 @@
# ecr

## Getting started

To make it easy for you to get started with GitLab, here's a list of recommended next steps.

Already a pro? Just edit this README.md and make it your own. Want to make it easy? [Use the template at the bottom](#editing-this-readme)!

## Add your files

- [ ] [Create](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#create-a-file) or [upload](https://docs.gitlab.com/ee/user/project/repository/web_editor.html#upload-a-file) files
- [ ] [Add files using the command line](https://docs.gitlab.com/ee/gitlab-basics/add-file.html#add-a-file-using-the-command-line) or push an existing Git repository with the following command:

```
cd existing_repo
git remote add origin https://gitlab.shared.cloud.dnxbrasil.com.br/dnx-br/sandbox/genai/ecr.git
git branch -M main
git push -uf origin main
```

## Integrate with your tools

- [ ] [Set up project integrations](https://gitlab.shared.cloud.dnxbrasil.com.br/dnx-br/sandbox/genai/ecr/-/settings/integrations)

## Collaborate with your team

- [ ] [Invite team members and collaborators](https://docs.gitlab.com/ee/user/project/members/)
- [ ] [Create a new merge request](https://docs.gitlab.com/ee/user/project/merge_requests/creating_merge_requests.html)
- [ ] [Automatically close issues from merge requests](https://docs.gitlab.com/ee/user/project/issues/managing_issues.html#closing-issues-automatically)
- [ ] [Enable merge request approvals](https://docs.gitlab.com/ee/user/project/merge_requests/approvals/)
- [ ] [Set auto-merge](https://docs.gitlab.com/ee/user/project/merge_requests/merge_when_pipeline_succeeds.html)

## Test and Deploy

Use the built-in continuous integration in GitLab.

- [ ] [Get started with GitLab CI/CD](https://docs.gitlab.com/ee/ci/quick_start/index.html)
- [ ] [Analyze your code for known vulnerabilities with Static Application Security Testing (SAST)](https://docs.gitlab.com/ee/user/application_security/sast/)
- [ ] [Deploy to Kubernetes, Amazon EC2, or Amazon ECS using Auto Deploy](https://docs.gitlab.com/ee/topics/autodevops/requirements.html)
- [ ] [Use pull-based deployments for improved Kubernetes management](https://docs.gitlab.com/ee/user/clusters/agent/)
- [ ] [Set up protected environments](https://docs.gitlab.com/ee/ci/environments/protected_environments.html)

***

# Editing this README

When you're ready to make this README your own, just edit this file and use the handy template below (or feel free to structure it however you want - this is just a starting point!). Thanks to [makeareadme.com](https://www.makeareadme.com/) for this template.

## Suggestions for a good README

Every project is different, so consider which of these sections apply to yours. The sections used in the template are suggestions for most open source projects. Also keep in mind that while a README can be too long and detailed, too long is better than too short. If you think your README is too long, consider utilizing another form of documentation rather than cutting out information.

## Name
Choose a self-explaining name for your project.

## Description
Let people know what your project can do specifically. Provide context and add a link to any reference visitors might be unfamiliar with. A list of Features or a Background subsection can also be added here. If there are alternatives to your project, this is a good place to list differentiating factors.

## Badges
On some READMEs, you may see small images that convey metadata, such as whether or not all the tests are passing for the project. You can use Shields to add some to your README. Many services also have instructions for adding a badge.

## Visuals
Depending on what you are making, it can be a good idea to include screenshots or even a video (you'll frequently see GIFs rather than actual videos). Tools like ttygif can help, but check out Asciinema for a more sophisticated method.

## Installation
Within a particular ecosystem, there may be a common way of installing things, such as using Yarn, NuGet, or Homebrew. However, consider the possibility that whoever is reading your README is a novice and would like more guidance. Listing specific steps helps remove ambiguity and gets people using your project as quickly as possible. If it only runs in a specific context like a particular programming language version or operating system or has dependencies that have to be installed manually, also add a Requirements subsection.

## Usage
Use examples liberally, and show the expected output if you can. It's helpful to have inline the smallest example of usage that you can demonstrate, while providing links to more sophisticated examples if they are too long to reasonably include in the README.

## Support
Tell people where they can go for help. It can be any combination of an issue tracker, a chat room, an email address, etc.

## Roadmap
If you have ideas for releases in the future, it is a good idea to list them in the README.

## Contributing
State if you are open to contributions and what your requirements are for accepting them.

For people who want to make changes to your project, it's helpful to have some documentation on how to get started. Perhaps there is a script that they should run or some environment variables that they need to set. Make these steps explicit. These instructions could also be useful to your future self.

You can also document commands to lint the code or run tests. These steps help to ensure high code quality and reduce the likelihood that the changes inadvertently break something. Having instructions for running tests is especially helpful if it requires external setup, such as starting a Selenium server for testing in a browser.

## Authors and acknowledgment
Show your appreciation to those who have contributed to the project.

## License
For open source projects, say how it is licensed.

## Project status
If you have run out of energy or time for your project, put a note at the top of the README saying that development has slowed down or stopped completely. Someone may choose to fork your project or volunteer to step in as a maintainer or owner, allowing your project to keep going. You can also make an explicit request for maintainers.
26
label/infra/ecr/__main__.py
Normal file
@@ -0,0 +1,26 @@
import json
import pulumi
import pulumi_aws as aws

caller_identity = aws.get_caller_identity()
account_id = caller_identity.account_id

config = pulumi.Config()
project = config.require("project")
environment = config.require("environment")

ecr_config = config.require_object("ecr")["entity_extraction"]

ecr_repo = aws.ecr.Repository(ecr_config['name'],
    name=ecr_config['name'],
    encryption_configurations=[{
        "encryption_type": "AES256",
    }],
    image_scanning_configuration={
        "scan_on_push": False,
    },
    image_tag_mutability=ecr_config['image_mutability'],
    opts=pulumi.ResourceOptions(protect=False))


pulumi.export("url", pulumi.Output.concat("ECR REPO ID:", ecr_repo.id))
5
label/infra/ecr/requirements.txt
Normal file
@@ -0,0 +1,5 @@
pulumi
pulumi-aws
pulumi-docker
boto3
setuptools
736
label/infra/handler.py
Normal file
736
label/infra/handler.py
Normal file
@@ -0,0 +1,736 @@
|
||||
import boto3
|
||||
import json
|
||||
import base64
|
||||
from io import BytesIO
|
||||
from PIL import Image, ImageDraw
|
||||
import numpy as np
|
||||
from scipy.optimize import linear_sum_assignment
|
||||
import re
|
||||
from pdf2image import convert_from_bytes
|
||||
|
||||
# Configuration
|
||||
REGION = 'us-east-1'
|
||||
CUSTOM_LABELS_PROJECT_ARN = 'modelid'
|
||||
CONFIDENCE_THRESHOLD = 80
|
||||
|
||||
class InMemoryDiagramProcessor:
|
||||
"""Process diagrams entirely in memory for Lambda"""
|
||||
|
||||
def __init__(self, region=REGION, custom_labels_arn=CUSTOM_LABELS_PROJECT_ARN):
|
||||
self.textract_client = boto3.client('textract', region_name=region)
|
||||
self.rekognition_client = boto3.client('rekognition', region_name=region)
|
||||
self.custom_labels_arn = custom_labels_arn
|
||||
self.region = region
|
||||
|
||||
def segment_image(self, img, grid_size=(5, 5), overlap_percent=10):
|
||||
"""
|
||||
Segment PIL Image into grid with overlap (in-memory)
|
||||
Returns list of (PIL Image, position_info) tuples
|
||||
"""
|
||||
img_width, img_height = img.size
|
||||
rows, cols = grid_size
|
||||
|
||||
overlap_factor = overlap_percent / 100.0
|
||||
segment_width = img_width / cols
|
||||
segment_height = img_height / rows
|
||||
|
||||
step_width = segment_width * (1 - overlap_factor)
|
||||
step_height = segment_height * (1 - overlap_factor)
|
||||
|
||||
segments = []
|
||||
|
||||
for row in range(rows):
|
||||
for col in range(cols):
|
||||
left = int(col * step_width)
|
||||
top = int(row * step_height)
|
||||
right = int(min(left + segment_width, img_width))
|
||||
bottom = int(min(top + segment_height, img_height))
|
||||
|
||||
segment = img.crop((left, top, right, bottom))
|
||||
|
||||
position_info = {
|
||||
'row': row,
|
||||
'col': col,
|
||||
'left': left,
|
||||
'top': top,
|
||||
'right': right,
|
||||
'bottom': bottom,
|
||||
'width': right - left,
|
||||
'height': bottom - top
|
||||
}
|
||||
|
||||
segments.append((segment, position_info))
|
||||
|
||||
return segments
|
||||
|
||||
def pil_to_bytes(self, pil_image):
|
||||
"""Convert PIL Image to bytes for AWS API calls"""
|
||||
buffer = BytesIO()
|
||||
pil_image.save(buffer, format='PNG')
|
||||
return buffer.getvalue()
|
||||
|
||||
def detect_text_segment(self, segment_image):
|
||||
"""Detect text in PIL Image segment using Textract"""
|
||||
image_bytes = self.pil_to_bytes(segment_image)
|
||||
|
||||
result = self.textract_client.detect_document_text(
|
||||
Document={'Bytes': image_bytes}
|
||||
)
|
||||
|
||||
return result
|
||||
|
||||
def clean_text_from_segment(self, segment_image, textract_data,
|
||||
shrink_percent=8.5, keep_regex_list=None, min_confidence=80):
|
||||
"""Remove text from PIL Image segment (in-memory)"""
|
||||
compiled_patterns = []
|
||||
if keep_regex_list:
|
||||
for pattern in keep_regex_list:
|
||||
try:
|
||||
compiled_patterns.append(re.compile(pattern))
|
||||
except re.error:
|
||||
pass
|
||||
|
||||
img = segment_image.copy()
|
||||
width, height = img.size
|
||||
draw = ImageDraw.Draw(img)
|
||||
|
||||
words_removed = 0
|
||||
words_kept = 0
|
||||
|
||||
for block in textract_data['Blocks']:
|
||||
if block['BlockType'] == 'WORD':
|
||||
text = block['Text']
|
||||
confidence = block['Confidence']
|
||||
|
||||
should_keep = False
|
||||
|
||||
if confidence < min_confidence:
|
||||
should_keep = True
|
||||
words_kept += 1
|
||||
|
||||
if compiled_patterns:
|
||||
for pattern in compiled_patterns:
|
||||
if pattern.match(text):
|
||||
should_keep = True
|
||||
words_kept += 1
|
||||
break
|
||||
|
||||
if should_keep:
|
||||
continue
|
||||
|
||||
bbox = block['Geometry']['BoundingBox']
|
||||
left = int(bbox['Left'] * width)
|
||||
top = int(bbox['Top'] * height)
|
||||
box_width = int(bbox['Width'] * width)
|
||||
box_height = int(bbox['Height'] * height)
|
||||
|
||||
if shrink_percent > 0:
|
||||
shrink_factor = shrink_percent / 100
|
||||
width_reduction = int(box_width * shrink_factor / 2)
|
||||
height_reduction = int(box_height * shrink_factor / 2)
|
||||
|
||||
left += width_reduction
|
||||
top += height_reduction
|
||||
box_width -= width_reduction * 2
|
||||
box_height -= height_reduction * 2
|
||||
|
||||
draw.rectangle(
|
||||
[(left, top), (left + box_width, top + box_height)],
|
||||
fill='white'
|
||||
)
|
||||
words_removed += 1
|
||||
|
||||
return img, {'words_removed': words_removed, 'words_kept': words_kept}
|
||||
|
||||
    def recognize_objects_segment(self, segment_image, min_confidence=CONFIDENCE_THRESHOLD):
        """Recognize objects in a PIL Image using Rekognition Custom Labels"""
        image_bytes = self.pil_to_bytes(segment_image)

        try:
            response = self.rekognition_client.detect_custom_labels(
                ProjectVersionArn=self.custom_labels_arn,
                Image={'Bytes': image_bytes},
                MinConfidence=min_confidence
            )

            return {
                'custom_labels': response.get('CustomLabels', []),
                'success': True
            }
        except Exception as e:
            return {
                'custom_labels': [],
                'success': False,
                'error': str(e)
            }

    def calculate_iou(self, box1, box2):
        """Calculate IoU between two bounding boxes"""
        x_left = max(box1['left'], box2['left'])
        y_top = max(box1['top'], box2['top'])
        x_right = min(box1['right'], box2['right'])
        y_bottom = min(box1['bottom'], box2['bottom'])

        if x_right < x_left or y_bottom < y_top:
            return 0.0

        intersection_area = (x_right - x_left) * (y_bottom - y_top)

        box1_area = (box1['right'] - box1['left']) * (box1['bottom'] - box1['top'])
        box2_area = (box2['right'] - box2['left']) * (box2['bottom'] - box2['top'])
        union_area = box1_area + box2_area - intersection_area

        if union_area == 0:
            return 0.0

        return intersection_area / union_area
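Outside the class, the IoU computation can be exercised standalone (a minimal sketch; the box dicts mirror the `global_bbox` layout used throughout this file):

```python
def calculate_iou(box1, box2):
    # Intersection rectangle (empty if the boxes do not overlap)
    x_left = max(box1['left'], box2['left'])
    y_top = max(box1['top'], box2['top'])
    x_right = min(box1['right'], box2['right'])
    y_bottom = min(box1['bottom'], box2['bottom'])

    if x_right < x_left or y_bottom < y_top:
        return 0.0  # no overlap

    intersection = (x_right - x_left) * (y_bottom - y_top)
    area1 = (box1['right'] - box1['left']) * (box1['bottom'] - box1['top'])
    area2 = (box2['right'] - box2['left']) * (box2['bottom'] - box2['top'])
    union = area1 + area2 - intersection
    return intersection / union if union else 0.0

a = {'left': 0, 'top': 0, 'right': 10, 'bottom': 10}
b = {'left': 5, 'top': 5, 'right': 15, 'bottom': 15}
print(calculate_iou(a, b))  # 25 / 175 ≈ 0.1428...
```

With the default `iou_threshold=0.3` used by `deduplicate_detections`, these two boxes would not be merged.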

    def merge_bounding_boxes(self, boxes):
        """Merge multiple bounding boxes into one"""
        if not boxes:
            return None

        return {
            'left': min(box['left'] for box in boxes),
            'top': min(box['top'] for box in boxes),
            'right': max(box['right'] for box in boxes),
            'bottom': max(box['bottom'] for box in boxes)
        }
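The merge is simply the min/max envelope over box edges; a standalone sketch:

```python
def merge_bounding_boxes(boxes):
    # Envelope: the smallest box containing all inputs
    if not boxes:
        return None
    return {
        'left': min(b['left'] for b in boxes),
        'top': min(b['top'] for b in boxes),
        'right': max(b['right'] for b in boxes),
        'bottom': max(b['bottom'] for b in boxes),
    }

merged = merge_bounding_boxes([
    {'left': 0, 'top': 0, 'right': 10, 'bottom': 10},
    {'left': 5, 'top': -2, 'right': 15, 'bottom': 8},
])
print(merged)  # {'left': 0, 'top': -2, 'right': 15, 'bottom': 10}
```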

    def deduplicate_detections(self, all_detections, iou_threshold=0.3):
        """Remove duplicate detections using NMS"""
        if not all_detections:
            return []

        detections_by_label = {}
        for det in all_detections:
            label = det['Name']
            if label not in detections_by_label:
                detections_by_label[label] = []
            detections_by_label[label].append(det)

        deduplicated = []

        for label, detections in detections_by_label.items():
            detections = sorted(detections, key=lambda x: x['Confidence'], reverse=True)

            groups = []
            used = set()

            for i, det in enumerate(detections):
                if i in used:
                    continue

                group = [det]
                used.add(i)

                for j, other_det in enumerate(detections):
                    if j in used or j == i:
                        continue

                    iou = self.calculate_iou(det['global_bbox'], other_det['global_bbox'])

                    if iou > iou_threshold:
                        group.append(other_det)
                        used.add(j)

                groups.append(group)

            for group in groups:
                if len(group) == 1:
                    deduplicated.append(group[0])
                else:
                    merged_bbox = self.merge_bounding_boxes([d['global_bbox'] for d in group])
                    merged_bbox['width'] = merged_bbox['right'] - merged_bbox['left']
                    merged_bbox['height'] = merged_bbox['bottom'] - merged_bbox['top']

                    avg_confidence = sum(d['Confidence'] for d in group) / len(group)

                    merged_detection = {
                        'Name': label,
                        'Confidence': avg_confidence,
                        'global_bbox': merged_bbox,
                        'merged_from': len(group)
                    }

                    deduplicated.append(merged_detection)

        return deduplicated

    def deduplicate_text_detections(self, all_text_detections, iou_threshold=0.5):
        """Remove duplicate text detections"""
        if not all_text_detections:
            return []

        all_text_detections = sorted(all_text_detections, key=lambda x: x['confidence'], reverse=True)

        deduplicated = []
        used = set()

        for i, text_det in enumerate(all_text_detections):
            if i in used:
                continue

            group = [text_det]
            used.add(i)

            for j, other_det in enumerate(all_text_detections):
                if j in used or j == i:
                    continue

                if text_det['text'].lower() == other_det['text'].lower():
                    iou = self.calculate_iou(text_det['global_bbox'], other_det['global_bbox'])

                    if iou > iou_threshold:
                        group.append(other_det)
                        used.add(j)

            deduplicated.append(text_det)

        return deduplicated

    def get_bbox_center(self, bbox):
        """Get center point of bounding box"""
        center_x = bbox['left'] + bbox['width'] / 2
        center_y = bbox['top'] + bbox['height'] / 2
        return (center_x, center_y)

    def calculate_distance(self, center1, center2):
        """Calculate Euclidean distance"""
        return np.sqrt((center1[0] - center2[0])**2 + (center1[1] - center2[1])**2)

    def match_objects_to_text_by_type(self, objects, all_text_detections, max_distance=200):
        """Match objects to text based on object type"""
        VM_LABEL_OBJECTS = ['globo', 'gaveta', 'retencao', 'espera']
        TWO_LABEL_OBJECTS = ['sis_con_dist', 'instrumento_local']

        vm_label_objects = []
        two_label_objects = []
        single_label_objects = []

        for obj in objects:
            obj_name = obj['Name'].lower()
            if obj_name in VM_LABEL_OBJECTS:
                vm_label_objects.append(obj)
            elif obj_name in TWO_LABEL_OBJECTS:
                two_label_objects.append(obj)
            else:
                single_label_objects.append(obj)

        vm_pattern = re.compile(r'VM-\d{4}')
        vm_texts = [t for t in all_text_detections if vm_pattern.search(t['text'])]
        other_texts = [t for t in all_text_detections if not vm_pattern.search(t['text'])]

        all_matches = []
        all_unmatched_objects = []
        all_unmatched_texts = []
        used_texts = set()

        # Part 1: Match VM-#### objects using the Hungarian algorithm
        if vm_label_objects and vm_texts:
            n_objects = len(vm_label_objects)
            n_texts = len(vm_texts)

            max_dim = max(n_objects, n_texts)
            cost_matrix = np.full((max_dim, max_dim), 1e10)

            for i, obj in enumerate(vm_label_objects):
                obj_center = self.get_bbox_center(obj['global_bbox'])

                for j, text_data in enumerate(vm_texts):
                    text_center = self.get_bbox_center(text_data['global_bbox'])
                    distance = self.calculate_distance(obj_center, text_center)

                    if max_distance and distance > max_distance:
                        cost_matrix[i, j] = 1e10
                    else:
                        cost_matrix[i, j] = distance

            row_indices, col_indices = linear_sum_assignment(cost_matrix)

            matched_obj_indices = set()
            matched_text_indices = set()

            for obj_idx, text_idx in zip(row_indices, col_indices):
                if (obj_idx >= n_objects or text_idx >= n_texts or
                        cost_matrix[obj_idx, text_idx] >= 1e10):
                    continue

                distance = cost_matrix[obj_idx, text_idx]

                match = {
                    'object_name': vm_label_objects[obj_idx]['Name'],
                    'object_bbox': vm_label_objects[obj_idx]['global_bbox'],
                    'object_confidence': vm_label_objects[obj_idx]['Confidence'],
                    'text': vm_texts[text_idx]['text'],
                    'text_bbox': vm_texts[text_idx]['global_bbox'],
                    'text_confidence': vm_texts[text_idx]['confidence'],
                    'distance': distance,
                    'match_type': 'vm_label'
                }

                all_matches.append(match)
                matched_obj_indices.add(obj_idx)
                matched_text_indices.add(text_idx)

            all_unmatched_objects.extend([vm_label_objects[i] for i in range(n_objects)
                                          if i not in matched_obj_indices])
            all_unmatched_texts.extend([vm_texts[j] for j in range(n_texts)
                                        if j not in matched_text_indices])

        # Part 2: Match two-label objects
        for obj in two_label_objects:
            obj_bbox = obj['global_bbox']
            obj_center_x = obj_bbox['left'] + obj_bbox['width'] / 2
            obj_center_y = obj_bbox['top'] + obj_bbox['height'] / 2

            texts_inside = []
            for text_data in other_texts:
                if id(text_data) in used_texts:
                    continue

                text_bbox = text_data['global_bbox']
                text_center_x = text_bbox['left'] + text_bbox['width'] / 2
                text_center_y = text_bbox['top'] + text_bbox['height'] / 2

                if (obj_bbox['left'] <= text_center_x <= obj_bbox['right'] and
                        obj_bbox['top'] <= text_center_y <= obj_bbox['bottom']):

                    distance_to_center = self.calculate_distance(
                        (obj_center_x, obj_center_y),
                        (text_center_x, text_center_y)
                    )

                    texts_inside.append({
                        'text_data': text_data,
                        'distance_to_center': distance_to_center,
                        'y_position': text_center_y
                    })

            if len(texts_inside) >= 2:
                texts_inside.sort(key=lambda t: t['distance_to_center'])
                closest_two = texts_inside[:2]
                closest_two.sort(key=lambda t: t['y_position'])

                top_text = closest_two[0]['text_data']
                bottom_text = closest_two[1]['text_data']

                match = {
                    'object_name': obj['Name'],
                    'object_bbox': obj_bbox,
                    'object_confidence': obj['Confidence'],
                    'text': f"{top_text['text']} / {bottom_text['text']}",
                    'text_top': top_text['text'],
                    'text_bottom': bottom_text['text'],
                    'text_bbox_top': top_text['global_bbox'],
                    'text_bbox_bottom': bottom_text['global_bbox'],
                    'text_confidence_top': top_text['confidence'],
                    'text_confidence_bottom': bottom_text['confidence'],
                    'distance': 0,
                    'match_type': 'two_labels'
                }

                all_matches.append(match)
                used_texts.add(id(top_text))
                used_texts.add(id(bottom_text))
            else:
                all_unmatched_objects.append(obj)

        # Part 3: Match single-label objects
        for obj in single_label_objects:
            obj_bbox = obj['global_bbox']
            obj_center_x = obj_bbox['left'] + obj_bbox['width'] / 2
            obj_center_y = obj_bbox['top'] + obj_bbox['height'] / 2

            texts_inside = []
            for text_data in other_texts:
                if id(text_data) in used_texts:
                    continue

                text_bbox = text_data['global_bbox']
                text_center_x = text_bbox['left'] + text_bbox['width'] / 2
                text_center_y = text_bbox['top'] + text_bbox['height'] / 2

                if (obj_bbox['left'] <= text_center_x <= obj_bbox['right'] and
                        obj_bbox['top'] <= text_center_y <= obj_bbox['bottom']):
                    texts_inside.append(text_data)

            if texts_inside:
                closest_text = min(texts_inside, key=lambda t: self.calculate_distance(
                    (obj_center_x, obj_center_y),
                    (t['global_bbox']['left'] + t['global_bbox']['width'] / 2,
                     t['global_bbox']['top'] + t['global_bbox']['height'] / 2)
                ))

                text_center_x = closest_text['global_bbox']['left'] + closest_text['global_bbox']['width'] / 2
                text_center_y = closest_text['global_bbox']['top'] + closest_text['global_bbox']['height'] / 2
                distance_to_center = self.calculate_distance(
                    (obj_center_x, obj_center_y),
                    (text_center_x, text_center_y)
                )

                match = {
                    'object_name': obj['Name'],
                    'object_bbox': obj_bbox,
                    'object_confidence': obj['Confidence'],
                    'text': closest_text['text'],
                    'text_bbox': closest_text['global_bbox'],
                    'text_confidence': closest_text['confidence'],
                    'distance': distance_to_center,
                    'match_type': 'single_label'
                }

                all_matches.append(match)
                used_texts.add(id(closest_text))
            else:
                all_unmatched_objects.append(obj)

        for text_data in other_texts:
            if id(text_data) not in used_texts:
                all_unmatched_texts.append(text_data)

        return {
            'matches': all_matches,
            'unmatched_objects': all_unmatched_objects,
            'unmatched_texts': all_unmatched_texts,
            'n_objects': len(objects),
            'n_texts': len(all_text_detections),
            'matching_rate': len(all_matches) / len(objects) if objects else 0
        }
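Part 1 pads the distance matrix to square (sentinel cost 1e10 for disallowed pairs) and solves it with `scipy.optimize.linear_sum_assignment`. The effect can be illustrated with a brute-force stdlib equivalent — a sketch only, fine for tiny matrices; the distances below are made up:

```python
from itertools import permutations

def assign(cost):
    """Minimum-total-cost one-to-one assignment over a square cost matrix."""
    n = len(cost)
    best, best_cols = float('inf'), None
    for cols in permutations(range(n)):
        total = sum(cost[i][cols[i]] for i in range(n))
        if total < best:
            best, best_cols = total, cols
    return list(best_cols)

# Two valve symbols, two "VM-####" tags; entry [i][j] is the center distance.
cost = [
    [12.0, 80.0],
    [75.0, 9.0],
]
cols = assign(cost)
print(cols)  # [0, 1] — each object is paired with its nearest tag
```

A greedy nearest-tag rule can pick the same tag twice; the assignment formulation guarantees each tag is used at most once while minimizing the total distance.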

    def process_diagram_inmemory(self, pil_image, grid_size=(5, 5), overlap_percent=10,
                                 keep_regex_list=None, min_confidence=80,
                                 custom_labels_confidence=80, iou_threshold=0.3,
                                 matching_max_distance=200):
        """
        Complete in-memory pipeline.
        Returns only the matches.
        """
        img_width, img_height = pil_image.size

        # Step 1: Segment
        segments = self.segment_image(pil_image, grid_size, overlap_percent)

        all_global_detections = []
        all_text_detections = []

        # Steps 2-4: Process each segment
        for segment_image, position_info in segments:
            # Detect text
            textract_data = self.detect_text_segment(segment_image)

            # Extract text with global coordinates
            for block in textract_data['Blocks']:
                if block['BlockType'] == 'WORD':
                    bbox = block['Geometry']['BoundingBox']

                    seg_left = position_info['left']
                    seg_top = position_info['top']
                    seg_width = position_info['width']
                    seg_height = position_info['height']

                    global_left = seg_left + int(bbox['Left'] * seg_width)
                    global_top = seg_top + int(bbox['Top'] * seg_height)
                    global_width = int(bbox['Width'] * seg_width)
                    global_height = int(bbox['Height'] * seg_height)

                    all_text_detections.append({
                        'text': block['Text'],
                        'confidence': block['Confidence'],
                        'global_bbox': {
                            'left': global_left,
                            'top': global_top,
                            'right': global_left + global_width,
                            'bottom': global_top + global_height,
                            'width': global_width,
                            'height': global_height
                        }
                    })

            # Clean text
            cleaned_image, _ = self.clean_text_from_segment(
                segment_image, textract_data,
                keep_regex_list=keep_regex_list, min_confidence=min_confidence
            )

            # Recognize objects
            detection_results = self.recognize_objects_segment(
                cleaned_image, min_confidence=custom_labels_confidence
            )

            if detection_results['success']:
                labels = detection_results['custom_labels']

                for label in labels:
                    if 'Geometry' in label and 'BoundingBox' in label['Geometry']:
                        bbox = label['Geometry']['BoundingBox']

                        seg_left = position_info['left']
                        seg_top = position_info['top']
                        seg_width = position_info['width']
                        seg_height = position_info['height']

                        global_left = seg_left + int(bbox['Left'] * seg_width)
                        global_top = seg_top + int(bbox['Top'] * seg_height)
                        global_width = int(bbox['Width'] * seg_width)
                        global_height = int(bbox['Height'] * seg_height)

                        global_detection = {
                            'Name': label['Name'],
                            'Confidence': label['Confidence'],
                            'global_bbox': {
                                'left': global_left,
                                'top': global_top,
                                'right': global_left + global_width,
                                'bottom': global_top + global_height,
                                'width': global_width,
                                'height': global_height
                            }
                        }

                        all_global_detections.append(global_detection)

        # Step 5: Deduplicate
        deduplicated_detections = self.deduplicate_detections(
            all_global_detections, iou_threshold=iou_threshold
        )

        deduplicated_text = self.deduplicate_text_detections(
            all_text_detections, iou_threshold=0.5
        )

        # Step 6: Match objects to text
        matching_results = self.match_objects_to_text_by_type(
            objects=deduplicated_detections,
            all_text_detections=deduplicated_text,
            max_distance=matching_max_distance
        )

        return matching_results


# ==================== LAMBDA HANDLER ====================

def lambda_handler(event, context):
    """
    AWS Lambda handler function.

    Expected event formats:
    1. PDF as base64 in body:
        {
            "pdf_base64": "<base64-encoded-pdf>",
            "config": {
                "grid_size": [5, 5],
                "overlap_percent": 10,
                ...
            }
        }

    2. PDF in S3:
        {
            "s3_bucket": "bucket-name",
            "s3_key": "path/to/file.pdf",
            "config": {...}
        }
    """

    try:
        # Parse event
        if isinstance(event.get('body'), str):
            body = json.loads(event['body'])
        else:
            body = event

        # Extract configuration
        config = body.get('config', {})
        grid_size = tuple(config.get('grid_size', [5, 5]))
        overlap_percent = config.get('overlap_percent', 10)
        keep_regex_list = config.get('keep_regex_list', [r'\+', r'.*[Xx].*', r'\*', r'\\'])
        min_confidence = config.get('min_confidence', 80)
        custom_labels_confidence = config.get('custom_labels_confidence', 60)
        iou_threshold = config.get('iou_threshold', 0.3)
        matching_max_distance = config.get('matching_max_distance', 200)
        custom_labels_arn = config.get('custom_labels_arn', CUSTOM_LABELS_PROJECT_ARN)
        dpi = config.get('dpi', 200)

        # Get PDF bytes
        if 'pdf_base64' in body:
            # PDF provided as base64 in the request
            pdf_bytes = base64.b64decode(body['pdf_base64'])
        elif 's3_bucket' in body and 's3_key' in body:
            # PDF in S3
            s3_client = boto3.client('s3')
            response = s3_client.get_object(
                Bucket=body['s3_bucket'],
                Key=body['s3_key']
            )
            pdf_bytes = response['Body'].read()
        else:
            return {
                'statusCode': 400,
                'body': json.dumps({
                    'error': 'Must provide either pdf_base64 or s3_bucket/s3_key'
                })
            }

        # Convert PDF to image (first page only, or the page specified in config)
        page_num = config.get('page', 0)  # 0-indexed
        images = convert_from_bytes(pdf_bytes, dpi=dpi, first_page=page_num + 1, last_page=page_num + 1)

        if not images:
            return {
                'statusCode': 400,
                'body': json.dumps({
                    'error': 'Could not convert PDF to image'
                })
            }

        diagram_image = images[0]

        # Initialize processor
        processor = InMemoryDiagramProcessor(
            region=REGION,
            custom_labels_arn=custom_labels_arn
        )

        # Process diagram
        matching_results = processor.process_diagram_inmemory(
            pil_image=diagram_image,
            grid_size=grid_size,
            overlap_percent=overlap_percent,
            keep_regex_list=keep_regex_list,
            min_confidence=min_confidence,
            custom_labels_confidence=custom_labels_confidence,
            iou_threshold=iou_threshold,
            matching_max_distance=matching_max_distance
        )

        # Return only the matches
        return {
            'statusCode': 200,
            'headers': {
                'Content-Type': 'application/json'
            },
            'body': json.dumps({
                'matches': matching_results['matches'],
                'summary': {
                    'total_matches': len(matching_results['matches']),
                    'unmatched_objects': len(matching_results['unmatched_objects']),
                    'unmatched_texts': len(matching_results['unmatched_texts']),
                    'matching_rate': matching_results['matching_rate']
                }
            })
        }

    except Exception as e:
        print(f"Error processing diagram: {str(e)}")
        import traceback
        traceback.print_exc()

        return {
            'statusCode': 500,
            'body': json.dumps({
                'error': str(e),
                'error_type': type(e).__name__
            })
        }
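For local testing, the handler accepts either shape described in its docstring; a minimal S3-style payload (bucket and key names here are placeholders, not values from this repo):

```python
import json

# Hypothetical request body for the s3_bucket/s3_key path of lambda_handler
event = {
    "s3_bucket": "my-diagrams-bucket",  # placeholder bucket
    "s3_key": "diagrams/plant-01.pdf",  # placeholder key
    "config": {
        "grid_size": [5, 5],
        "overlap_percent": 10,
        "min_confidence": 80,
        "custom_labels_confidence": 60,
        "iou_threshold": 0.3,
        "matching_max_distance": 200,
        "page": 0,
    },
}

# API Gateway proxy integrations deliver the body as a JSON string,
# which the handler re-parses with json.loads before reading its fields.
proxy_event = {"body": json.dumps(event)}
print(json.loads(proxy_event["body"])["config"]["grid_size"])  # [5, 5]
```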
35
label/infra/lambda_api_gateway/Pulumi.coodez.yaml
Normal file
@@ -0,0 +1,35 @@
config:
  aws:region: us-east-1
  project_name: labels-valvula-bloco-funcao
  lambda-api:
    - name: labels-valvula-bloco-funcao
      network_config:
        is_private: true
        vpc_id: vpc-098bd05c4ef524627
        private_subnet_ids: # using the API VPC endpoint in one subnet is cheaper than in two or more
          - subnet-0c8b5a9233eff22b4
          # - subnet-00adc4773686d8c1b
      timeout: 900
      memory: 2048
      ecr:
        repo_name: rekognition-valvulas-funcao
        tag: latest
      #env_vars:
      provisioned_concurrency: 0
      api_gateway:
        use_api_gw: true
        communication_type: REST # WEBSOCKET | HTTP | REST  # TODO: implement all types
        type: REGIONAL # PRIVATE | REGIONAL | EDGE
        authorization: NONE # | AWS_IAM | ...
        allow_inbound_any: true
        allow_inbound_cidrs:
          - 3.14.44.224/32 # DNX VPN IP
        create_and_allow_vpce: true # only used for the PRIVATE API type
        stage_name: dev
        routes:
          - method: POST
            path: /execute
      iam:
        managed_policies: []
        #custom_policies:

11
label/infra/lambda_api_gateway/Pulumi.yaml
Normal file
@@ -0,0 +1,11 @@
name: lambda-api
runtime:
  name: python
  options:
    toolchain: pip
    virtualenv: venv
description: A Python program to deploy a serverless application on AWS
config:
  pulumi:tags:
    value:
      pulumi:template: serverless-aws-python
110
label/infra/lambda_api_gateway/__main__.py
Normal file
@@ -0,0 +1,110 @@
import json
import pulumi
import pulumi_aws as aws
import pulumi_aws_apigateway as apigateway
import api_gw

config = pulumi.Config()
aws_config = pulumi.Config("aws")
aws_region = aws_config.require("region")
account_id = aws.get_caller_identity().account_id


def create_lambda_role(lambda_name, iam_config=None):
    """Create an IAM role for the Lambda with configurable policies."""

    # Base managed policies
    managed_policies = [aws.iam.ManagedPolicy.AWS_LAMBDA_BASIC_EXECUTION_ROLE,
                        aws.iam.ManagedPolicy.AWS_LAMBDA_VPC_ACCESS_EXECUTION_ROLE]

    if iam_config and "managed_policies" in iam_config:
        managed_policies.extend(iam_config["managed_policies"])

    role = aws.iam.Role(f"role-{lambda_name}",
        assume_role_policy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "sts:AssumeRole",
                "Effect": "Allow",
                "Principal": {"Service": "lambda.amazonaws.com"},
            }],
        }),
        managed_policy_arns=managed_policies
    )

    # Create custom inline policies from the YAML config
    if iam_config and "custom_policies" in iam_config:
        for policy in iam_config["custom_policies"]:
            aws.iam.RolePolicy(f"{lambda_name}-{policy['name']}",
                role=role.id,
                policy=json.dumps({
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": policy["effect"],
                        "Action": policy["actions"],
                        "Resource": policy["resources"]
                    }]
                })
            )

    return role


lambda_api_configs = config.require_object("lambda-api")
for l_api in lambda_api_configs:
    ecr_repo = aws.ecr.get_repository(name=l_api["ecr"]["repo_name"])
    ecr_image = aws.ecr.get_image(repository_name=l_api["ecr"]["repo_name"], image_tag=l_api["ecr"]["tag"])
    lambda_name = l_api["name"]
    if not l_api["network_config"]["is_private"]:
        # Raising a string is invalid in Python 3; raise an exception instance instead
        raise NotImplementedError("Public Lambda functions are not implemented yet")
    else:
        lambda_sg = aws.ec2.SecurityGroup(f"lambda-sg-{lambda_name}",
            description=f"SG for Lambda {lambda_name}",
            egress=[{
                "cidr_blocks": ["0.0.0.0/0"],
                "from_port": 0,
                "protocol": "-1",
                "to_port": 0,
            }],
            name=lambda_name,
            vpc_id=l_api["network_config"]["vpc_id"],
        )

    # print(pulumi.Output.all(ecr_repo.repository_url, ecr_image.image_digest).apply(lambda args: f'{args[0]}@{args[1]}'))
    # Create the role for this specific Lambda
    role = create_lambda_role(lambda_name, l_api.get("iam"))
    if "env_vars" in l_api:
        variables = {k: v for k, v in l_api["env_vars"].items()}
    else:
        variables = {}

    # Define the Lambda function from the container image in ECR
    fn = aws.lambda_.Function(f"{lambda_name}",
        package_type="Image",
        # image_uri=ecr_repo.repository_url.apply(lambda url: f"{url}:latest"),  # assuming the 'latest' tag
        image_uri=pulumi.Output.all(ecr_repo.repository_url, ecr_image.image_digest).apply(lambda args: f'{args[0]}@{args[1]}'),
        role=role.arn,
        timeout=l_api["timeout"],
        memory_size=l_api["memory"],
        environment={
            "variables": variables
        },
        vpc_config=dict(
            ipv6_allowed_for_dual_stack=False,
            subnet_ids=l_api["network_config"]["private_subnet_ids"],
            security_group_ids=[lambda_sg.id]
        ),
        publish=l_api["provisioned_concurrency"] > 0,  # publishing a version is required for provisioned concurrency
    )

    if l_api["provisioned_concurrency"] > 0:
        lambda_concurrency = aws.lambda_.ProvisionedConcurrencyConfig(l_api["name"],
            provisioned_concurrent_executions=l_api["provisioned_concurrency"],
            function_name=fn.name,
            qualifier=fn.version
        )

    api_config = l_api["api_gateway"]
    if api_config["use_api_gw"]:
        if api_config["communication_type"] == "HTTP":
            api_gw.create_api_gatewayv2(api_config, l_api, fn, aws_region, config.require('project_name'))
        else:
            api_gw.create_api_gateway(api_config, l_api, fn, account_id, aws_region, config.require('project_name'))
407
label/infra/lambda_api_gateway/api_gw.py
Normal file
@@ -0,0 +1,407 @@
|
||||
import pulumi
|
||||
import pulumi_aws as aws
|
||||
import json
|
||||
# import time
|
||||
|
||||
def create_api_gatewayv2(api_config, l_api, fn, aws_region, project_name):
|
||||
# Create VPC endpoint for PRIVATE API Gateway
|
||||
# if api_config["type"] == "PRIVATE":
|
||||
# vpce = aws.ec2.VpcEndpoint(f"vpce-{l_api['name']}",
|
||||
# vpc_id=l_api["network_config"]["vpc_id"],
|
||||
# service_name=f"com.amazonaws.{aws_region}.execute-api",
|
||||
# subnet_ids=l_api["network_config"]["private_subnet_ids"],
|
||||
# private_dns_enabled=True,
|
||||
# vpc_endpoint_type="Interface"
|
||||
# )
|
||||
|
||||
# HTTP API apigwv2
|
||||
# Create API Gateway V2 HTTP API
|
||||
api = aws.apigatewayv2.Api(f"api-{l_api['name']}",
|
||||
name=l_api['name'],
|
||||
protocol_type="HTTP",
|
||||
)
|
||||
|
||||
sg_vpc_link = aws.ec2.SecurityGroup(f"secgroup-{l_api['name']}",
|
||||
vpc_id=l_api["network_config"]["vpc_id"],
|
||||
ingress=[
|
||||
aws.ec2.SecurityGroupIngressArgs(
|
||||
protocol="tcp",
|
||||
from_port=0,
|
||||
to_port=0,
|
||||
cidr_blocks=["3.14.44.224/32"]
|
||||
)
|
||||
],
|
||||
egress=[
|
||||
aws.ec2.SecurityGroupEgressArgs(
|
||||
protocol="-1",
|
||||
from_port=0,
|
||||
to_port=0,
|
||||
cidr_blocks=["0.0.0.0/0"]
|
||||
)
|
||||
]
|
||||
)
|
||||
# Create a VPC Link
|
||||
vpc_link = aws.apigatewayv2.VpcLink(
|
||||
f"VpcLink-{project_name}",
|
||||
subnet_ids=l_api["network_config"]["private_subnet_ids"],
|
||||
security_group_ids=sg_vpc_link,
|
||||
)
|
||||
|
||||
# Add IAM resource policy to restrict access by VPC (for PRIVATE type)
|
||||
# if api_config["type"] == "PRIVATE":
|
||||
# api_policy = aws.apigatewayv2.ApiPolicy(f"policy-{l_api['name']}",
|
||||
# api_id=api.id,
|
||||
# policy=pulumi.Output.all(api.arn, l_api["network_config"]["vpc_id"]).apply(
|
||||
# lambda args: json.dumps({
|
||||
# "Version": "2012-10-17",
|
||||
# "Statement": [{
|
||||
# "Effect": "Allow",
|
||||
# "Principal": "*",
|
||||
# "Action": "execute-api:Invoke",
|
||||
# "Resource": f"{args[0]}/*",
|
||||
# "Condition": {
|
||||
# "StringEquals": {
|
||||
# "aws:sourceVpc": args[1]
|
||||
# }
|
||||
# }
|
||||
# }]
|
||||
# })
|
||||
# )
|
||||
# )
|
||||
|
||||
integration_get = None
|
||||
integration_post = None
|
||||
# Create Lambda integrations
|
||||
for route in api_config["routes"]:
|
||||
if route["method"] == "GET" and not integration_get:
|
||||
integration_get = aws.apigatewayv2.Integration(f"integration-{l_api['name']}-{route['path'].replace('/', '-')}",
|
||||
api_id=api.id,
|
||||
integration_type="AWS_PROXY",
|
||||
integration_method="GET",
|
||||
integration_uri=fn.invoke_arn,
|
||||
# connection_type="INTERNET",
|
||||
payload_format_version="2.0",
|
||||
# connection_id=vpc_link.id
|
||||
)
|
||||
elif route["method"] == "POST" and not integration_post:
|
||||
integration_post = aws.apigatewayv2.Integration(f"integration-{l_api['name']}-{route['path'].replace('/', '-')}",
|
||||
api_id=api.id,
|
||||
integration_type="AWS_PROXY",
|
||||
integration_uri=fn.invoke_arn,
|
||||
integration_method="POST",
|
||||
payload_format_version="2.0"
|
||||
)
|
||||
|
||||
# Create routes dynamically from config
|
||||
routes = []
|
||||
for route in api_config["routes"]:
|
||||
if route['method'] == "GET":
|
||||
integration = integration_get
|
||||
elif route['method'] == "POST":
|
||||
integration = integration_post
|
||||
r = aws.apigatewayv2.Route(
|
||||
f"route-{l_api['name']}-{route['method']}-{route['path'].replace('/', '-')}",
|
||||
api_id=api.id,
|
||||
route_key=f"{route['method']} {route['path']}",
|
||||
target=integration.id.apply(lambda id: f"integrations/{id}")
|
||||
)
|
||||
routes.append(r)
|
||||
|
||||
# Lambda permission for API Gateway
|
||||
permission = aws.lambda_.Permission(
|
||||
f"permission-{l_api['name']}",
|
||||
action="lambda:InvokeFunction",
|
||||
function=fn.name,
|
||||
principal="apigateway.amazonaws.com",
|
||||
source_arn=api.execution_arn.apply(lambda arn: f"{arn}/*/*")
|
||||
)
|
||||
|
||||
# Create stage
|
||||
stage = aws.apigatewayv2.Stage(f"stage-{l_api['name']}",
|
||||
api_id=api.id,
|
||||
name="$default",
|
||||
auto_deploy=True
|
||||
)
|
||||
|
||||
# Export the API URL
|
||||
pulumi.export(f"{l_api['name']}-url", api.api_endpoint)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
def create_api_gateway(api_config, l_api, fn, account_id, aws_region, project_name):
|
||||
|
||||
vpce_id = None
|
||||
# Create API Gateway
|
||||
if api_config["type"] == "PRIVATE":
|
||||
if api_config["create_and_allow_vpce"]:
|
||||
# Cria uma nova Security Group para o VPC Endpoint
|
||||
vpc_endpoint_sg = aws.ec2.SecurityGroup(f"api-gateway-vpce-sg-{project_name}",
|
||||
vpc_id=l_api["network_config"]["vpc_id"],
|
||||
description=f"Security Group for API Gateway VPC Endpoint - {project_name}",
|
||||
ingress=[
|
||||
aws.ec2.SecurityGroupIngressArgs(
|
||||
protocol="tcp",
|
||||
from_port=0,
|
||||
to_port=0,
|
||||
cidr_blocks=["0.0.0.0/0"],
|
||||
description="Allow HTTPS traffic to API Gateway Endpoint"
|
||||
),
|
||||
],
|
||||
egress=[
|
||||
# Permite todo o tráfego de saída. Pode ser restringido se necessário.
|
||||
aws.ec2.SecurityGroupEgressArgs(
|
||||
protocol="-1", # "-1" significa todos os protocolos
|
||||
from_port=0,
|
||||
to_port=0,
|
||||
cidr_blocks=["0.0.0.0/0"],
|
||||
),
|
||||
]
|
||||
)
|
||||
|
||||
# Create VPC endpoint for PRIVATE API Gateway
|
||||
vpce = aws.ec2.VpcEndpoint(f"vpce-{l_api['name']}",
|
||||
vpc_id=l_api["network_config"]["vpc_id"],
|
||||
service_name=f"com.amazonaws.{aws_region}.execute-api",
|
||||
subnet_ids=l_api["network_config"]["private_subnet_ids"],
|
||||
private_dns_enabled=False,
|
||||
vpc_endpoint_type="Interface",
|
||||
security_group_ids=[vpc_endpoint_sg]
|
||||
)
|
||||
vpce_id = vpce.id
|
||||
|
||||
# api = aws.apigateway.RestApi(f"api-{l_api["name"]}",
|
||||
# description=l_api["name"],
|
||||
# fail_on_warnings=False,
|
||||
# put_rest_api_mode='merge',
|
||||
# endpoint_configuration={
|
||||
# "types": api_config["type"],
|
||||
# "vpc_endpoint_ids": [vpce.id] if api_config["type"] == "PRIVATE" and api_config["create_and_allow_vpce"] else None
|
||||
# }
|
||||
# )
|
||||
|
||||

    api = aws.apigateway.RestApi(f"api-{l_api['name']}",
        description=l_api["name"],
        put_rest_api_mode='merge' if api_config["type"] == "PRIVATE" else 'overwrite',
        fail_on_warnings=False,
        endpoint_configuration={
            "types": api_config["type"],
            "vpc_endpoint_ids": [vpce_id] if api_config["type"] == "PRIVATE" and api_config["create_and_allow_vpce"] else None
        }
    )
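The endpoint-configuration branching above can be isolated and checked on its own as plain dict logic, with no AWS calls (the IDs below are made up for illustration):

```python
def endpoint_configuration(api_type, create_and_allow_vpce, vpce_id):
    """Mirror of the conditional above: only a PRIVATE API with a VPCE gets endpoint IDs."""
    return {
        "types": api_type,
        "vpc_endpoint_ids": [vpce_id] if api_type == "PRIVATE" and create_and_allow_vpce else None,
    }

private_cfg = endpoint_configuration("PRIVATE", True, "vpce-0f00")
regional_cfg = endpoint_configuration("REGIONAL", False, None)
print(private_cfg)   # vpc_endpoint_ids is populated
print(regional_cfg)  # vpc_endpoint_ids is None
```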

    # Build policy statements
    policy_outputs = []
    if api_config["allow_inbound_any"]:
        policy_outputs.append(
            pulumi.Output.all(aws_region, account_id, api.id, vpce_id).apply(
                lambda args: {
                    "Effect": "Allow",
                    "Principal": "*",
                    "Action": "execute-api:Invoke",
                    "Resource": f"arn:aws:execute-api:{args[0]}:{args[1]}:{args[2]}/*"
                }
            )
        )
    else:
        if api_config["type"] == "PRIVATE" and api_config["create_and_allow_vpce"]:
            policy_outputs.append(
                pulumi.Output.all(aws_region, account_id, api.id, vpce_id).apply(
                    lambda args: {
                        "Effect": "Allow",
                        "Principal": "*",
                        "Action": "execute-api:Invoke",
                        "Resource": f"arn:aws:execute-api:{args[0]}:{args[1]}:{args[2]}/*",
                        "Condition": {
                            "StringEquals": {
                                "aws:sourceVpce": args[3]
                            }
                        }
                    }
                )
            )

        if api_config.get("allow_inbound_cidrs"):
            policy_outputs.append(
                pulumi.Output.all(aws_region, account_id, api.id).apply(
                    lambda args: {
                        "Effect": "Allow",
                        "Principal": "*",
                        "Action": "execute-api:Invoke",
                        "Resource": f"arn:aws:execute-api:{args[0]}:{args[1]}:{args[2]}/*",
                        "Condition": {
                            "IpAddress": {
                                "aws:SourceIp": api_config.get("allow_inbound_cidrs", [])
                            }
                        }
                    }
                )
            )

    if len(policy_outputs) > 0:
        # Resource policy for private API
        resource_policy = aws.apigateway.RestApiPolicy(f"policy-{l_api['name']}",
            rest_api_id=api.id,
            policy=pulumi.Output.all(*policy_outputs).apply(
                lambda statements: json.dumps({
                    "Version": "2012-10-17",
                    "Statement": statements
                })
            )
        )
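With the Pulumi Outputs resolved, the VPCE-restricted statement serializes to an ordinary IAM policy document. A minimal standalone sketch, using made-up region, account, API, and VPCE IDs:

```python
import json

# Hypothetical resolved values -- in the real stack these arrive via pulumi.Output.all(...)
region, account_id, api_id, vpce_id = "us-east-1", "123456789012", "abc123", "vpce-0f00"

statement = {
    "Effect": "Allow",
    "Principal": "*",
    "Action": "execute-api:Invoke",
    "Resource": f"arn:aws:execute-api:{region}:{account_id}:{api_id}/*",
    "Condition": {"StringEquals": {"aws:sourceVpce": vpce_id}},
}
policy = json.dumps({"Version": "2012-10-17", "Statement": [statement]})
print(policy)
```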

    # Create resources and methods dynamically
    resources = {}
    api_dependencies = []

    for route in api_config["routes"]:
        path = route["path"].strip("/")
        path_parts = path.split("/")

        # Build nested resources
        parent_id = api.root_resource_id
        resource_path = ""

        for part in path_parts:
            resource_path += f"/{part}"
            resource_key = resource_path

            if resource_key not in resources:
                resources[resource_key] = aws.apigateway.Resource(
                    f"resource-{l_api['name']}-{part}",
                    rest_api=api.id,
                    parent_id=parent_id,
                    path_part=part
                )

            parent_id = resources[resource_key].id

        # Create method for this route
        method = aws.apigateway.Method(
            f"method-{l_api['name']}-{route['method']}-{path.replace('/', '-')}",
            rest_api=api.id,
            resource_id=parent_id,
            http_method=route["method"],
            authorization=api_config["authorization"]
        )

        # Create integration
        integration = aws.apigateway.Integration(
            f"integration-{l_api['name']}-{route['method']}-{path.replace('/', '-')}",
            rest_api=api.id,
            resource_id=parent_id,
            http_method=method.http_method,
            integration_http_method="POST",  # for Lambda integration, it's always POST
            type="AWS_PROXY",
            uri=fn.invoke_arn
        )

        method_response = aws.apigateway.MethodResponse(
            f"methodResponse-{route['method']}-{path.replace('/', '-')}",
            rest_api=api.id,
            resource_id=parent_id,
            http_method=method.http_method,
            status_code="200",
            response_models={"application/json": "Empty"}
        )

        integration_response = aws.apigateway.IntegrationResponse(
            f"integrationResponse-{route['method']}-{path.replace('/', '-')}",
            rest_api=api.id,
            resource_id=parent_id,
            http_method=method.http_method,
            status_code="200",
            selection_pattern="",
            response_templates={"application/json": ""},
            opts=pulumi.ResourceOptions(depends_on=[integration])
        )

        api_dependencies.append(method)
        api_dependencies.append(integration)
        api_dependencies.append(method_response)
        api_dependencies.append(integration_response)
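The nested-resource walk above can be sketched in plain Python: each prefix of a route path gets exactly one API Gateway resource, shared across routes that have a common ancestor. A small self-contained sketch with hypothetical route paths and no AWS calls:

```python
def collect_resource_keys(routes):
    """Return the unique resource paths the loop above would create, in creation order."""
    resources = {}
    for route in routes:
        path = route["path"].strip("/")
        resource_path = ""
        for part in path.split("/"):
            resource_path += f"/{part}"
            # setdefault mimics the `if resource_key not in resources` guard
            resources.setdefault(resource_path, part)
    return list(resources)

keys = collect_resource_keys([
    {"path": "/chat/messages", "method": "POST"},
    {"path": "/chat/history", "method": "GET"},
])
print(keys)  # → ['/chat', '/chat/messages', '/chat/history']
```

Note that `/chat` is created only once even though both routes pass through it, which is exactly why the `resources` dict is keyed by the full resource path.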

    # Lambda permission for API Gateway
    permission = aws.lambda_.Permission(
        f"permission-{l_api['name']}-general",
        action="lambda:InvokeFunction",
        function=fn.name,
        principal="apigateway.amazonaws.com",
        source_arn=api.execution_arn.apply(
            lambda arn: f"{arn}/*/*"
        )
    )
    api_dependencies.append(permission)

    # Create a deployment for the API Gateway
    deployment = aws.apigateway.Deployment(f"deployment-{l_api['name']}",
        rest_api=api.id,
        # triggers={"redeployment": str(int(time.time()))},
        opts=pulumi.ResourceOptions(depends_on=list(resources.values()) + api_dependencies)
    )

    # Create a stage
    stage = aws.apigateway.Stage(f"stage-{l_api['name']}",
        deployment=deployment.id,
        rest_api=api.id,
        stage_name=api_config["stage_name"],
        opts=pulumi.ResourceOptions(depends_on=[deployment])
    )

    if False:  # use_api_key  # TODO: wire up API key support
        api_key = aws.apigateway.ApiKey(api_gateway_config["name"],
            name=api_gateway_config["name"],
            description=api_gateway_config["description"],
            enabled=True
        )

        # Associate the API key with the stage via a usage plan
        usage_plan = aws.apigateway.UsagePlan("api-usage-plan",
            name=api_gateway_config["usage_plan_name"],
            description="Usage plan for API Gateway associated with API Key",
            api_stages=[aws.apigateway.UsagePlanApiStageArgs(
                api_id=api.id,
                stage=stage.stage_name
            )]
        )

        # Attach the API key to the usage plan
        aws.apigateway.UsagePlanKey("api-key-usage-plan-association",
            key_id=api_key.id,
            key_type="API_KEY",
            usage_plan_id=usage_plan.id
        )

    # Export the stage URL
    pulumi.export(f"{l_api['name']}-url", stage.invoke_url)
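Because the API is PRIVATE and the VPC endpoint has `private_dns_enabled=False`, clients inside the VPC generally reach it through the endpoint-specific hostname (`{api-id}-{vpce-id}.execute-api.{region}.amazonaws.com`) rather than the exported `stage.invoke_url`. A small sketch of building that URL; the IDs below are made up:

```python
def private_invoke_url(api_id, vpce_id, region, stage, path=""):
    """Endpoint-specific URL form for invoking a private REST API through a VPC endpoint."""
    return f"https://{api_id}-{vpce_id}.execute-api.{region}.amazonaws.com/{stage}{path}"

url = private_invoke_url("abc123", "vpce-0f00", "us-east-1", "dev", "/chat/messages")
print(url)  # → https://abc123-vpce-0f00.execute-api.us-east-1.amazonaws.com/dev/chat/messages
```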

4
label/infra/lambda_api_gateway/requirements.txt
Normal file
@@ -0,0 +1,4 @@
pulumi>=3.0.0,<4.0.0
pulumi-aws>=7.0.0,<8.0.0
pulumi-aws-apigateway>=3.0.0,<4.0.0
pulumi-awsx>=3.0.0,<4.0.0