Add Integration Tests #816

Open · Sneagan wants to merge 10 commits into master from feature/integration-testing

Conversation

@Sneagan Sneagan (Collaborator) commented Aug 7, 2025

  • Kafka token issuance
  • Kafka token redemption
  • V1 HTTP Issuer GET endpoint
  • V1 HTTP Token Redemption POST endpoint
  • V3 HTTP Token Redemption POST endpoint

This checks the entire token redemption flow for all known production
use cases and verifies that duplicate redemption fails. There are
some cases with time-limited issuers where a token is issued for a
future issuer and its redemption fails; these tokens are tested and
verified to fail as expected.
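
To make the shape of the duplicate-redemption check concrete, here is a minimal sketch; the helper names (createTestIssuer, issueAndSignToken, redeemTokenV3) are hypothetical stand-ins, not the names used in the actual suite:

//go:build integration

package integration

import (
	"net/http"
	"testing"

	"github.com/stretchr/testify/require"
)

// Hypothetical helpers standing in for the suite's real setup and HTTP
// calls; names and signatures are illustrative only.
func createTestIssuer(t *testing.T) string                            { t.Helper(); panic("sketch") }
func issueAndSignToken(t *testing.T, issuer string) string            { t.Helper(); panic("sketch") }
func redeemTokenV3(t *testing.T, issuer, token string) *http.Response { t.Helper(); panic("sketch") }

// The duplicate-redemption check in miniature: the first redemption
// succeeds; replaying the same token must return 409 Conflict.
func TestDuplicateRedemptionSketch(t *testing.T) {
	issuer := createTestIssuer(t)
	token := issueAndSignToken(t, issuer)

	resp := redeemTokenV3(t, issuer, token)
	require.Equal(t, http.StatusOK, resp.StatusCode)

	resp2 := redeemTokenV3(t, issuer, token)
	require.Equal(t, http.StatusConflict, resp2.StatusCode,
		"duplicate redemption should be blocked with 409 Conflict")
}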

@Sneagan Sneagan changed the title from Feature/integration testing to Add Integration Tests on Aug 7, 2025

socket-security bot commented Aug 8, 2025

All alerts resolved.

This PR previously contained dependency changes with security issues that have been resolved, removed, or ignored.

@Sneagan Sneagan force-pushed the feature/integration-testing branch from abe7f82 to 12b7ac1 on August 12, 2025 02:05
@Sneagan Sneagan marked this pull request as ready for review August 12, 2025 02:18
mihaiplesa previously approved these changes Aug 12, 2025
@mihaiplesa mihaiplesa requested a review from Copilot August 12, 2025 07:12

Copilot AI left a comment

Pull Request Overview

This PR adds comprehensive integration tests for the challenge-bypass system, including complete end-to-end flows for token issuance and redemption via Kafka and HTTP endpoints. The tests verify the entire system working together with all dependencies (PostgreSQL, Kafka, DynamoDB).

  • Added full integration test suite covering Kafka token flows and HTTP endpoints
  • Enhanced logging and debugging capabilities for development and testing
  • Updated build configuration and CI pipeline to support integration testing

Reviewed Changes

Copilot reviewed 14 out of 15 changed files in this pull request and generated 2 comments.

Summary per file:

integration-tests/intergration_test.go: New comprehensive integration test file with end-to-end token flows
docker-compose.integration.yml: Test environment configuration with all required services
Dockerfile.test: Test-specific container configuration for running integration tests
Makefile: Added integration test commands and cleanup utilities
.github/workflows/integration-tests.yml: CI workflow for automated integration testing
README.md: Updated documentation with integration test instructions
utils/test/dynamodb.go: Adjusted timeout for table activation in test environment
server/*.go, kafka/*.go: Enhanced logging and debugging output for integration tests
go.mod: Updated testify dependency to latest version

@@ -0,0 +1,1021 @@
//go:build integration

Copilot AI Aug 12, 2025

The filename contains a typo: 'intergration_test.go' should be 'integration_test.go' (missing 'a' in 'integration').

@@ -418,7 +431,7 @@ func avroRedeemErrorResultFromError(
Associated_data: []byte(message),
}
resultSet := avroSchema.RedeemResultSet{
Request_id: "",
Request_id: requestID,

Copilot AI Aug 12, 2025

The original code had Request_id: "" but now uses requestID. However, looking at the function signature and context, the requestID parameter may not always be available or valid when this function is called in error scenarios. This change might introduce bugs where an empty or invalid requestID is used.

@@ -64,7 +64,7 @@ func SetupDynamodbTables(db *dynamodb.DynamoDB) error {
return fmt.Errorf("error creating dynamodb table")
}

err = tableIsActive(db, *input.TableName, time.Second, 10*time.Millisecond)
err = tableIsActive(db, *input.TableName, time.Second, 100*time.Millisecond)
Collaborator

suggestion: scope the error in the if

if err := tableIsActive(db, *input.TableName, time.Second, 100*time.Millisecond); err != nil {
    return fmt.Errorf("error table is not active %w", err)
}

@@ -62,7 +62,11 @@ func (c *Server) GetLatestIssuer(issuerType string, issuerCohort int16) (*model.
)
if err != nil {
if errors.Is(err, errIssuerCohortNotFound) {
c.Logger.Error("Issuer with given type and cohort not found")
c.Logger.Error(
Collaborator

question: why are we logging here as opposed to just returning errors and logging in the handlers/edges?

Collaborator Author

There are many cases in the server code where we're logging in places where we should rely on the response logger. In this case, I added details for my own debugging. I'll remove this case because it shouldn't be here, but we should make a point to come back around to a lot of these logs later.

PublicKey *crypto.PublicKey `json:"public_key"`
ExpiresAt string `json:"expires_at,omitempty"`
Cohort int16 `json:"cohort"`
}
Collaborator

suggestion: add line break between types

Duration string `json:"duration"`
Overlap int `json:"overlap"`
Buffer int `json:"buffer"`
}
Collaborator

suggestion: add line break between types

//go:build integration
// +build integration

package integration
Collaborator

suggestion: add a new line between function and type declarations

Not sure what's going on with the formatting, but there is a mixture of new lines and no line breaks between functions and types.

Collaborator Author

What's going on is my formatting plugin was failing and I didn't notice. Thanks. XD

expiresAt, err := time.Parse(time.RFC3339, issuerResp.ExpiresAt)
require.NoError(t, err, "failed to parse expires_at timestamp")

if expiresAt.After(time.Now()) {
Collaborator

suggestion: remove the extra logging and branching and just fail if there is no expires_at

Collaborator Author

Gotta disagree on this one. We have expiration, buffers, overlaps, and more involved in these tokens and debugging issues related to those values is a challenge. I'd rather be verbose in the logs and eat the millisecond cost of branching in the test to have clarity.


t.Log("Generating random tokens...")
// Number of tokens must be divisible by the sum of Buffer and Overlap of the Issuer
for i := range 2 {
Collaborator

@clD11 clD11 Aug 14, 2025

suggestion: pass the issuer into the function or surface the buffer and overlap

It is not clear how or where this value comes from or why it is set to two without having to really dig into the test setup. Also, if we want to change the number of tokens it will interfere with the other tests.

Collaborator Author

Totally right on this one. Thanks for calling this out.

- Verify the application correctly processes messages between Kafka topics
- Ensure proper communication between all services

#### Running Integration Tests
Collaborator

question: is there a way to run a single integration test?

Collaborator Author

I'll add documentation about the go test -run argument to help make this clear.
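
Roughly: go test -tags integration -run <TestName> ./integration-tests, run from inside the test environment; the exact invocation depends on the final Makefile wiring.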

Collaborator

To run that we need to bring up the containers and test environment, so we would need to run make dev and then from the shell run go test -run [args], but make dev doesn't include the new docker integration file, so I am wondering if that would work.

It would be good to have something where we can run a single test and pick up any code changes we are making locally, similar to bat-go.

Collaborator Author

I see your meaning. Good point, will add.


t.Log("HTTP ISSUER GET ENDPOINT TEST PASSED")
}
func TestTokenIssuanceViaKafkaAndRedeemViaHTTPFlow(t *testing.T) {
Collaborator

suggestion: break the test into subtests and make calls and assertions in the individual tests

It's hard to follow what's happening in these tests, and there is a lot of coupling between issuing tokens and redeeming within the test suite. For long-term maintainability it might be easier to break the code up. For example,

	// create issuer
	
	t.Run("sign_tokens", func(t *testing.T) {
		// ...
	})

	t.Run("map_tokens", func(t *testing.T) {
		// ...
	})

	t.Run("redeem_v3_success", func(t *testing.T) {
		// make http redeem calls
		// assert
	})

	t.Run("redeem_v3_duplicate", func(t *testing.T) {
		// make http redeem calls
		// assert
	})

require.Equal(t, http.StatusConflict, resp2.StatusCode,
"Duplicate redemption should be blocked with 409 Conflict")
} else {
// Some failures are expected, especially for tokens with future keys
Collaborator

question: if the tests are not deterministic, how do we know things are working?

It might be beneficial to move these calls into the specific tests and try to alleviate these issues.

TokenPreimage *crypto.TokenPreimage `json:"t"`
Signature *crypto.VerificationSignature `json:"signature"`
}
type blindedTokenRedeemResponse struct {
Collaborator

suggestion: can we remove this, looks like it's unused

type blindedTokenRedeemResponse struct {
Cohort int16 `json:"cohort"`
}
type BlindedTokenIssueRequestV2 struct {
Collaborator

suggestion: can we remove this, looks like it's unused

BlindedTokens []*crypto.BlindedToken `json:"blinded_tokens"`
IssuerCohort int16 `json:"cohort"`
}
type blindedTokenIssueResponse struct {
Collaborator

suggestion: can we remove this, looks like it's unused

Member

@kdenhartog kdenhartog left a comment

I can't see any issues with the Postgres warnings, and I see v3 was pinned.

This LGTM from a security perspective.

Address PR comments regarding testing structure.

[puLL-Merge] - brave-intl/challenge-bypass-server@816

Description

This PR adds comprehensive integration testing capabilities to the challenge-bypass-server project. It introduces a Docker-based integration test environment that spins up all required dependencies (PostgreSQL, Kafka, Zookeeper, LocalStack for DynamoDB emulation) and tests end-to-end flows including token issuance via Kafka, token redemption via both Kafka and HTTP endpoints, and proper handling of duplicate requests.

Security Hotspots

  1. Hardcoded Test Credentials (Medium Risk - Test Environment Only)

    • docker-compose.integration.yml contains hardcoded credentials (testuser:testpassword, AWS keys test:test)
    • While this is acceptable for test environments, ensure these credentials never leak to production configurations
  2. Network Exposure (Low Risk - Test Environment Only)

    • Multiple services expose ports (2416, 5432, 9092, etc.) in the test environment
    • This is expected for testing but should be isolated to test networks

Privacy Hotspots

  1. Test Data Logging (Low Risk)
    • Integration tests log various identifiers and request data
    • Test metadata includes user IDs which could be considered PII in some contexts
    • Logging is appropriately scoped to test environment only

Changes

.github/workflows/integration-tests.yml

  • New GitHub Actions workflow that runs integration tests on PR events
  • Uses pinned action version for security
  • Includes cleanup step that always runs

Dockerfile.test

  • New multi-stage Docker image for running integration tests
  • Builds Rust FFI library from specific git tag (1.0.1)
  • Installs PostgreSQL client and Go testing dependencies

Makefile

  • Adds comprehensive integration test targets (integration-test, integration-test-clean, integration-test-logs)
  • Updates docker-test target to exclude integration tests and set AWS pager
  • Provides detailed logging and error handling for test orchestration

README.md

  • Extensive documentation of integration testing capabilities
  • Clear instructions for running, debugging, and understanding the test suite
  • Explains test architecture and what flows are validated

docker-compose.integration.yml

  • Complete test environment configuration with health checks
  • Isolated test topics and databases
  • LocalStack configuration for DynamoDB emulation
  • Proper service dependencies and startup ordering

go.mod / go.sum

  • Updates testify dependency from v1.9.0 to v1.10.0

integration-tests/integration_test.go

  • Comprehensive integration test suite (986 lines)
  • Tests token issuance and redemption flows via both Kafka and HTTP
  • Validates duplicate detection, error handling, and end-to-end cryptographic operations
  • Includes proper setup/teardown and service readiness checks
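
The service readiness checks mentioned in the last bullet typically just poll until the server answers. A minimal sketch, assuming an HTTP health endpoint and illustrative timings (neither is taken from the actual suite):

package integration

import (
	"fmt"
	"net/http"
	"time"
)

// waitForServer polls an HTTP endpoint until it returns 200 OK or the
// deadline passes. The /metrics path and 250ms interval are assumptions.
func waitForServer(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/metrics")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("server at %s not ready within %s", baseURL, timeout)
}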

kafka/main.go

  • Modifies Prometheus registry handling for test environments
  • Updates TLS dialer logic to handle test environment
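
One common pattern for the Prometheus change (an assumption on my part; the diff isn't reproduced here) is to hand the server a fresh registry under test, so repeated startups don't panic on duplicate collector registration. A minimal sketch:

package kafka

import (
	"os"

	"github.com/prometheus/client_golang/prometheus"
)

// metricsRegistry is a sketch: return a fresh registry in test
// environments to avoid duplicate-registration panics across restarts.
// The ENV variable name and the exact wiring are assumptions.
func metricsRegistry() *prometheus.Registry {
	if os.Getenv("ENV") == "TEST" {
		return prometheus.NewRegistry()
	}
	if reg, ok := prometheus.DefaultRegisterer.(*prometheus.Registry); ok {
		return reg
	}
	return prometheus.NewRegistry()
}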

kafka/signed_blinded_token_issuer_handler.go

  • Enhanced error logging with additional context for debugging

kafka/signed_token_redeem_handler.go

  • Improved error handling and logging
  • Fixed error result request ID assignment
  • Added debug logging for issuer matching

server/issuers.go

  • Removes redundant error logging for issuer not found cases

server/tokens.go

  • Adds debug logging for issuer information and token verification failures

utils/test/dynamodb.go

  • Increases retry interval for table activation checks (10ms → 100ms)
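
The call site shown earlier, tableIsActive(db, *input.TableName, time.Second, 100*time.Millisecond), suggests a polling helper roughly like the sketch below. This is a reconstruction for illustration, not the repository's actual implementation:

package test

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// tableIsActive polls DescribeTable every `interval` until the table
// reports ACTIVE or `timeout` elapses. Signature reconstructed from
// the call site; the real helper may differ.
func tableIsActive(db *dynamodb.DynamoDB, tableName string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := db.DescribeTable(&dynamodb.DescribeTableInput{
			TableName: aws.String(tableName),
		})
		if err == nil && out.Table != nil &&
			aws.StringValue(out.Table.TableStatus) == dynamodb.TableStatusActive {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("table %s not active within %s", tableName, timeout)
}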
sequenceDiagram
    participant GHA as GitHub Actions
    participant Docker as Docker Compose
    participant Kafka as Kafka
    participant DB as PostgreSQL
    participant LS as LocalStack
    participant CBP as Challenge Bypass Server
    participant Tests as Integration Tests

    GHA->>Docker: make integration-test
    Docker->>DB: Start PostgreSQL
    Docker->>Kafka: Start Kafka + Zookeeper
    Docker->>LS: Start LocalStack (DynamoDB)
    Docker->>CBP: Start main application
    
    Note over Docker: Wait for all services healthy
    
    Docker->>Tests: Build and start test runner
    Tests->>CBP: Create test issuer (HTTP)
    Tests->>Kafka: Send token signing request
    CBP->>Kafka: Process signing request
    Kafka->>Tests: Return signed tokens
    Tests->>Kafka: Send redemption request
    CBP->>Kafka: Process redemption
    Kafka->>Tests: Return redemption result
    Tests->>CBP: Test HTTP redemption endpoints
    CBP->>Tests: Return redemption response
    Tests->>Tests: Validate duplicate detection
    
    Tests->>Docker: Tests complete
    Docker->>Docker: Cleanup all containers/volumes
    Docker->>GHA: Report results
