Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Target project: opencode-analyzer/data-provider
Commits on Source (4)
@@ -2,12 +2,34 @@ variables:
  GIT_SUBMODULE_STRATEGY: recursive

stages:
  - version_bump
  - build
  - test
  - build_kubernetes_dev
  - build_kubernetes
  - deploy

version_bump:
  stage: version_bump
  image: python:3
  before_script:
    - pip install bump-my-version
    - git config --global user.email "no-reply@opencode.de"
    - git config --global user.name "CICD Pipeline"
  script:
    # Determine the bump type based on commit messages
    - |
      bump_type="patch"
      if git log $(git describe --tags --abbrev=0)..HEAD --grep="BREAKING CHANGE" --oneline | grep -q "BREAKING CHANGE"; then
        bump_type="major"
      elif git log $(git describe --tags --abbrev=0)..HEAD --grep="feat" --oneline | grep -q "feat"; then
        bump_type="minor"
      fi
    - bump-my-version bump --commit --tag $bump_type VERSION
    - git push --follow-tags https://git:$CI_PUSH_TOKEN@$CI_SERVER_HOST/$CI_PROJECT_PATH.git HEAD:$CI_COMMIT_REF_NAME
  only:
    - main

build:
  image:
@@ -55,17 +77,15 @@ deploy_development:
  before_script:
    - export KUBECONFIG=$KUBECONFIG_FILE
  stage: deploy
  script:
    - ./kubernetes/scripts/deploy.sh "dev" "${CI_COMMIT_SHORT_SHA}" "${CI_DEPLOY_USER} requested to deploy ${CI_COMMIT_BRANCH} ${CI_COMMIT_SHA}"
  when: manual

deploy_production:
  image: bitnami/kubectl:1.30-debian-12
  before_script:
    - export KUBECONFIG=$KUBECONFIG_FILE
  stage: deploy
  script:
    - ./kubernetes/scripts/deploy.sh "prod" "${CI_COMMIT_SHORT_SHA}" "${CI_DEPLOY_USER} requested to deploy ${CI_COMMIT_BRANCH} ${CI_COMMIT_SHA}"
  when: manual
0.1.0
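The bump-type detection in the `.gitlab-ci.yml` above can be tried outside CI; a minimal sketch of the same decision logic as a shell function over commit subjects (a simplification — the real job greps `git log` output since the last tag):

```shell
# Mirror of the CI bump-type decision, factored into a function so it can
# be exercised without a git history. Input: newline-separated commit subjects.
decide_bump() {
  if printf '%s\n' "$1" | grep -q "BREAKING CHANGE"; then
    echo "major"
  elif printf '%s\n' "$1" | grep -q "feat"; then
    echo "minor"
  else
    echo "patch"
  fi
}

decide_bump "feat: add tool endpoint"   # minor
decide_bump "fix: typo in README"       # patch
```

Like the CI script, this checks `BREAKING CHANGE` before `feat`, so a breaking feature still yields a major bump.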
# Deployment Process

## Decision: Manual Triggering of Deployment Pipeline in GitLab

- Status: Draft
- Date: 2024-10-07

Technical Story:
https://gitlab.opencode.de/opencode-analyzer/management/-/issues/94

#### Context and Problem Statement
Our current deployment process is not clearly defined, including the
requirements that must be met to deploy a release to development or
@@ -17,7 +19,7 @@ existing GitLab infrastructure, supports our requirements, and provides
flexibility to accommodate last-minute changes or withhold releases
based on emerging issues.

#### Considered Options

- Automated Deployment Upon Commit/Merge/Release: While ensuring fast
  and consistent delivery of features and fixes, there is less control
@@ -26,15 +28,15 @@ based on emerging issues.
  flexibility and does not add value to our current structure and
  working style.

#### Decision Outcome

We have decided to adopt manual triggering of the deployment pipeline in
GitLab for deploying changes to development and production. This
increases control over what gets deployed and when, and minimizes the
risk of errors and disruptions in our development process and for end
users.

###### Consequences

- Enhanced control: deployments can be timed to suit our needs in
  development, reducing the risk of impact on users.
@@ -48,3 +50,23 @@ users.
including:

- Definition of steps required (e.g. passing the test suite)
- Preparing the environment for deployment via pipelines
## Implementation

The GitLab CI/CD pipelines are configured to require manual intervention
for deployment steps to the development and production environments. Hence,
the decision when to deploy to either environment is made by a developer
or maintainer of the project.

For the development environment, it is crucial not to deploy while
testing is in progress on that environment.

For the production environment, a deployment should only be done after:

- the build and test steps finished successfully,
- the development deployment finished successfully, and
- smoke testing or end-to-end testing on the development environment
  was successful.

Deployments are done from the projects' GitLab registries using the
latest artifacts.
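In GitLab CI terms, the manual gate is expressed with `when: manual` on the deploy jobs; a minimal sketch based on the project's pipeline configuration:

```yaml
deploy_production:
  stage: deploy
  script:
    - ./kubernetes/scripts/deploy.sh "prod" "${CI_COMMIT_SHORT_SHA}" "${CI_DEPLOY_USER} requested to deploy ${CI_COMMIT_BRANCH} ${CI_COMMIT_SHA}"
  when: manual   # a developer or maintainer triggers this job in the GitLab UI
```

With `when: manual`, the job appears in the pipeline but waits for a person to press play instead of running automatically.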
# Development Guideline
This document serves as a continuation of the [setup and install guide](install_and_setup.md) and a general guideline
for developers who wish to contribute to the Data Provider codebase. For starters, we highly recommend reading the [architecture
documentation](architecture.md) to get a better understanding of the application and the interaction between components
inside the OpenCoDE Analyzer. Please also refer to the [OpenAPI Docs](https://dev-kpi.o4oe.de/api/webjars/swagger-ui/index.html#/).
## Table of Contents
- [Project Structure](#project-structure)
- [Development Workflow](#development-workflow)
- [Adding Tools to the Pipeline](#adding-tools-to-the-pipeline)
- [Evaluating an OpenCoDE Project](#evaluating-an-opencode-project)
## Project Structure
#### Layout
The general structure of the code repository is shown as follows:
```
data-provider
├── app/backend/src # Data-provider backend code
│ ├── main/ # Backend logic code
│ └── test/ # Tests
├── doc/ # Documentation
├── kubernetes/ # Kubernetes config files for deployment
├── software-product-health-analyzer/ # (Submodule) Core library for KPI calculation
├── .env # Required environment variables
├── .gitlab-ci.yml # CI configuration, including deployment
├── Dockerfile # Image blueprint for data-provider
├── build.gradle.kts # Contains gradle tasks
├── docker-compose.yml # Mainly used for local runs
├── publiccode.yml # OpenCoDE Project specific config
├── settings.gradle.kts # Required for composite build
└── venv # Helper script for devs
```
#### Build Tools
We use [Gradle](https://docs.gradle.org/current/userguide/userguide.html) as our build tool.
The project uses a [composite build](https://docs.gradle.org/current/userguide/composite_builds.html) structure.
It consists of the backend app and the KPI calculation core ([SPHA](https://github.com/fraunhofer-iem/software-product-health-analyzer)).
For dependency management we use [Gradle version catalogs](https://docs.gradle.org/current/userguide/platforms.html#sub:version-catalog);
the catalog is located in `app/gradle/libs.version.toml`. To add a new plugin, please refer to the [official
guide](https://docs.gradle.org/current/userguide/platforms.html#sub:conventional-dependencies-toml).
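To illustrate the catalog format, an entry might look like the following (the coordinates and versions here are hypothetical, not the project's actual dependencies):

```toml
[versions]
kotlin = "2.0.0"

[libraries]
# referenced from build scripts as libs.spring.webflux
spring-webflux = { module = "org.springframework:spring-webflux", version = "6.1.0" }

[plugins]
kotlin-jvm = { id = "org.jetbrains.kotlin.jvm", version.ref = "kotlin" }
```

Centralizing versions this way keeps the backend app and the SPHA composite build aligned on shared dependencies.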
#### Backend
The Data Provider backend component is a Spring Boot application written in Kotlin.
Its structure follows the [recommended layout](https://docs.spring.io/spring-boot/reference/using/structuring-your-code.html).
It uses [Spring Webflux](https://docs.spring.io/spring-framework/reference/web/webflux.html) for the reactive web framework.
#### Configuration Files
Helper scripts as well as [spring properties](https://docs.spring.io/spring-boot/how-to/properties-and-configuration.html)
files are located in the `app/backend/src/main/resources/` folder.
#### Testing Setup
A detailed explanation regarding tests is written in the [testing concept documentation](testing_concept.md).
We run tests via a [Gradle task](https://docs.gradle.org/current/userguide/tutorial_using_tasks.html): `./gradlew test`.
By default, the task runs all tests and generates a report file as an artifact.
Test scripts are located in the `src/test/` folder, which also contains the profile properties for the testing environment.
The test folders are sorted to represent their original package path in the `src/main/` folder.
We use [Jacoco](https://www.eclemma.org/jacoco/) to generate the test reports.
#### External Tools
For our local setup, we run the pipeline tools locally and the tools are located in `app/backend/tools`.
Currently, it contains the database and OCCMD tool. In production, these tools run in the pipeline.
The compose files of each tool are referenced in the main compose file and are usually executed together from there.
Please check the tasks in `build.gradle.kts` for more info on how they are executed via Gradle.
#### Artifacts
Generally, we can directly read the [GitLab CI YAML file](../.gitlab-ci.yml) to know how build artifacts are produced.
For reference on how to read the file, please refer to the [official documentation](https://docs.gitlab.com/ee/ci/yaml/index.html).
The image artifacts are created for every push to the dev and main branch and are saved in the project's container registry.
Test reports are created for every push on all branches and are accessible through the GitLab UI in the build artifacts section.
We can also see the test coverage report in our merge request directly.
## Development Workflow
#### Issues and Boards
We recommend that you go to the [OpenCoDE Analyzer group page](https://gitlab.opencode.de/opencode-analyzer) and access the issue board from there. The board visualizes
the development workflow following agile methodology. There you will be able to see all the issues related to the OpenCoDE Analyzer, not just the specific component. When creating an issue, you should include a clear description, problem statement, acceptance criteria, relevant attachments, and labels.
#### Working on an Issue
Our project has two major branches: **main** and **dev**. Both branches are
[protected](https://docs.gitlab.com/ee/user/project/protected_branches.html). The **main** branch represents production-ready
code, while the **dev** branch is used for ongoing development. To begin working on an issue, create a new branch from
the **dev** branch. For clarity, the new branch name should reflect the issue it addresses (e.g. `doc/development-guide`).
#### Creating a Merge Request
After creating an issue, create a [Merge Request](https://docs.gitlab.com/ee/user/project/merge_requests/) (MR) either through
the GitLab interface or through the [CLI](https://docs.gitlab.com/ee/topics/git/commit.html#formats-for-push-options).
In the description, provide a thorough explanation of what the MR contains, what changes it will bring, or what it fixes.
A good description will help the review process. Please also link any related issue in the description for better visibility.
#### Commit Guideline
To maintain a clean and navigable git history, we follow the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/)
specification, which uses a structured format for commit messages. Each commit message should follow the template: `type: subject`.
Here are the types we use:
- fix: Bug fixes
- feat: New features
- build: Changes affecting the build system or external dependencies
- ci: Changes to our CI configuration files and scripts
- test: Adding or updating tests
- chore: Non-feature related changes to the code, such as renaming or moving functions within the same file
- doc: Documentation changes
- refactor: Code changes that neither fix a bug nor add a feature
- dep: Dependency updates
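The `type: subject` template can be checked mechanically; a sketch with a hypothetical helper (not part of the repository):

```shell
# Check that a commit subject starts with one of the agreed commit types.
valid_commit() {
  case "$1" in
    fix:*|feat:*|build:*|ci:*|test:*|chore:*|doc:*|refactor:*|dep:*) echo "ok" ;;
    *) echo "invalid" ;;
  esac
}

valid_commit "feat: add OCCMD tool service"   # ok
valid_commit "added stuff"                    # invalid
```

A check like this could be wired into a local commit hook, though the project does not mandate one.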
#### Development
We use [ktfmt](https://github.com/facebook/ktfmt?tab=readme-ov-file) with `kotlinlang` style to format our code.
Before committing, we should run `./gradlew ktfmtFormat` so the changes comply with the code style.
We can also install the [IntelliJ plugin](https://plugins.jetbrains.com/plugin/14912-ktfmt) and configure it to
`kotlinlang` under the plugin settings. Before pushing, we should also run `./gradlew test` to check whether any tests break.
Even though both Gradle tasks run automatically in the CI pipeline, running them locally before pushing to
the remote branch saves time and effort.
#### Review Process
Once a merge request is ready for review, remove the draft status and make sure all pipelines pass and no merge conflicts exist.
Then, assign another developer as a reviewer. Before merging, address and resolve all review comments.
## Adding Tools to the Pipeline
Implementing a new tool is not straightforward, as it involves other components and the platform team.
We assume that you have already read the previously mentioned documentation and have run the Analyzer locally.
Generally, it requires the developer to:
1. Make a standalone tool
2. Update [SPHA](https://github.com/fraunhofer-iem/software-product-health-analyzer/blob/main/doc/SPHA.md)
3. Update [data-provider](https://gitlab.opencode.de/opencode-analyzer/data-provider)
4. Verify that everything works
There is no language restriction for the tool itself, but SPHA and the data-provider are written in Kotlin.
In the following sections, we will break down each of these steps in detail.
#### Making the tool
As mentioned in the [architecture docs](architecture.md), the platform team is responsible for running the tools in the pipeline.
Hence, to add a new tool to the pipeline, we need to provide them with a container image.
A general checklist for making a new tool is:
- The tool must run as a container.
- The tool input must be the project ID.
- The tool output must be written in stdout and in [JSON format](https://www.json.org/json-en.html).
- The image must not require root privileges or a root user (i.e. the container must be able to function in rootless mode).
- The image must be stored in an image registry (i.e. in OpenCoDE GitLab Container Registry) and the link must be provided to the platform team.
- Any configuration files, secrets, volumes, exposed ports, ingress and egress traffic must be provided to the platform team.
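The contract above can be illustrated with a hypothetical minimal tool: project ID in, JSON result on stdout (the field names here are made up for the sketch, not a prescribed schema):

```shell
# Hypothetical tool entry point: takes an OpenCoDE project ID and
# writes its result as JSON to stdout, as the checklist requires.
run_tool() {
  project_id="$1"
  printf '{"projectId": %s, "findings": []}\n' "$project_id"
}

run_tool 42   # {"projectId": 42, "findings": []}
```

A real tool would be packaged as a rootless container image with this behavior as its entry point.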
#### Update SPHA
Before making any changes to the Software Product Health Analyzer (SPHA) library, please first read the [contribution guideline](https://github.com/fraunhofer-iem/software-product-health-analyzer/blob/main/CONTRIBUTING.md) and [documentation](https://github.com/fraunhofer-iem/software-product-health-analyzer/blob/main/doc/SPHA.md).
This [pull request](https://github.com/fraunhofer-iem/software-product-health-analyzer/pull/15) is an example of a tool implementation in SPHA.
After that, the following steps are needed:
1. Make a new tool adapter in `adapter/.../adapter/tools/`.
2. Make your tool's [DTO](https://martinfowler.com/eaaCatalog/dataTransferObject.html) in `model/.../model/adapter/`.
3. (If needed) Make any new KPI hierarchy and calculation strategies in `core/.../core/strategies`.
4. Write the corresponding unit tests.
#### Update Data Provider
For the Data Provider, we need to define a new service to get and process the tool result from the tool pipeline API.
This [merge request](https://gitlab.opencode.de/opencode-analyzer/data-provider/-/merge_requests/37) is an example of a
tool implementation in Data Provider.
These are the steps to integrate your tool in data-provider:
- Update the SPHA submodule in data-provider to your commit id
- Add tool API endpoint
- Add `opencode.api.<tool-name>=<your-tool-endpoint>` to the `application.properties`
- The exact tool endpoint might be decided by the platform team; for now, just use any placeholder
- Add a variable in `app/backend/.../app/configuration/OpenCodeApiProperties.kt`
- Use the `ort` as an example
- Implement your new tool service and its data class in `app/backend/.../app/tools`
- The tool service is for querying the tool API
- The data class is to deserialize the JSON response
- Update `app/backend/.../app/tool/service/ToolService.kt`
- Import your tool service
- Add tool in `createAllTools()` method
- Update `app/backend/.../app/tool/service/ToolRunService.kt`
- Import your tool service and SPHA adapter
- Add your tool service as a private variable
- Add a new async call for your tool in `apiJobs` in `createToolRunForRepository()` method
- Write the corresponding unit tests for the tool service
#### Verifying the tool
To verify whether all the changes work, we need to run all components together.
For this, we need to use the [mock tool API server](https://gitlab.opencode.de/opencode-analyzer/mock-tool-api-server)
to simulate the platform pipeline.
The goal is to test the new tool and the changes made to SPHA and Data Provider.
To start testing, we need to run the data-provider, dashboard, and mock tool API.
If all logs look fine and your tool result is shown in the dashboard, then your changes are successful.
What is left now is to hand over the tool to the platform team. Once they have integrated the new tool into the pipeline,
we can deploy the Data Provider to the dev and (eventually) prod cluster.
## Evaluating an OpenCoDE Project
In the [installation and setup guide](install_and_setup.md), we ran the OpenCoDE Analyzer locally.
By default, it evaluates the sample projects in `.env`. To evaluate a custom project:
1. Get the project ID from the triple-dot icon in the top right corner of the project's repository page.
   - We use two GitLab instances, one for [dev](https://gitlab.dev.o4oe.de/) and one for [prod](https://gitlab.opencode.de/).
2. Overwrite the `PROJECT_IDS` variable in `.env`.
3. Adjust `opencode.host` for the local profile in `application.properties` to the appropriate GitLab address.
4. Make sure the database and dashboard are up, or run them with `./gradlew run-db run-dashboard`.
5. Run the data-provider with `./gradlew run`.
6. Access the dashboard via a browser at `localhost:5000` and the custom project should appear there.
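Steps 2 and 3 amount to small config edits; a hypothetical example (the ID `1234` is a placeholder, not a real project):

```
# .env — overwrite the sample projects with a single custom project
PROJECT_IDS=1234
```

Correspondingly, `opencode.host` for the local profile in `application.properties` would point at the GitLab instance hosting that project (e.g. https://gitlab.dev.o4oe.de/ for the dev instance).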
# Release Process

## Decision: Implementation of Semantic Versioning and Git Tagging for Release Management

- Status: Implemented
- Date: 2024-10-07

Technical Story: [View on
OpenCoDE](https://gitlab.opencode.de/opencode-analyzer/management/-/issues/94)

#### Context and Problem Statement
Our current release management process lacks a standardized method for
versioning and tracking changes across releases, making release
@@ -15,27 +17,27 @@ rollback releases and complicates communication about changes and
features included in each release. Further, it complicates compatibility
tracking between components (mainly `DataProvider` and `dashboard`).

#### Considered Options

- Calendar Versioning (CalVer): Calendar Versioning is a convention
  based on dates for release versioning used by popular projects like
  Ubuntu. However, there is no clear reason to use it in our context
  based on ["When to use CalVer"](https://calver.org/#when-to-use-calver).
- Continuous Releases based on commit IDs: Using commit IDs for
  release management is a viable path on rapidly changing projects
  with the ambition to be able to roll out any and every commit.
  However, in our context we prefer more control over the release
  process and compatibility between the components.

#### Decision Outcome

We will adopt Semantic Versioning for our software releases and use Git
tagging to mark these releases in our version control system. This
approach will standardize our release process, providing clear,
version-controlled, and easily identifiable releases.

###### Consequences

- Introduces a systematic approach to versioning that is widely
  recognized and understood.
@@ -60,3 +62,27 @@ version-controlled, and easily identifiable releases.
    - Modification of existing software and build process to include
      the version from the VERSION file on API route or in the
      frontend.
## Implementation
In order to use Semantic Versioning and Git Tags for release management
we selected [Bump My
Version](https://github.com/callowayproject/bump-my-version).
The tool provides the functionality to:
- read the current version from the `VERSION` file in the repository
- bump the version according to major, minor, or patch increments
- create a Git tag and versioning commit to reflect the current
version
The tool is run in the CI/CD pipeline whenever a merge to main is
conducted. In order to decide between a major, minor, or patch increment,
the pipeline includes a script that analyses the commit messages for:

- `BREAKING CHANGE` to indicate a major version bump
- `feat` to indicate a minor version bump

and defaults to a patch version when neither keyword is detected.
Thus it is important to use the correct keywords in the merge commit if
the branch commits are squashed.
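The effect of each increment on the `VERSION` file can be sketched in plain shell (this mimics what `bump-my-version` computes, it is not the tool itself):

```shell
# Pure-shell semantic version bump: bump <major|minor|patch> <version>
bump() {
  # Split MAJOR.MINOR.PATCH into its three components.
  IFS=. read -r major minor patch <<EOF
$2
EOF
  case "$1" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "${major}.$((minor + 1)).0" ;;
    patch) echo "${major}.${minor}.$((patch + 1))" ;;
  esac
}

bump minor 0.1.0   # 0.2.0
bump major 1.2.3   # 2.0.0
```

Note that major and minor bumps reset the lower-order components to zero, as Semantic Versioning requires.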
# Troubleshooting guide

This guide assists developers in diagnosing and resolving issues that may arise during development.
It is assumed that the reader has read the [development guide](/doc/development_guide.md) in advance.
This document may be used as the first place to search for error keywords or common errors during development.
However, this does **not** cover the following topics:

- Kubernetes related guides. These are included in the [administration tasks guide](administration_tasks.md).
- Network related or infrastructure issues. Please contact [the platform team](mailto:dcsupport@wx1.de).
- IDE specific issues. Users are responsible for their own environment.
- Bugs found during development. Known bugs should be documented and reported as a separate [issue](https://gitlab.opencode.de/groups/opencode-analyzer/-/issues).

We encourage readers to update this guide with general troubleshooting tips and future common issues.

## Pipeline

### Checking Logs
There are several ways to read job logs via the GitLab UI:

- Download the CI job artifacts: Build > Artifacts > Expand and download the file under the artifact column.
- For every pushed commit: click the circle icon near the commit SHA > Stages column.

The GitLab pipeline guide with pictures is shown [here](https://docs.gitlab.com/ee/ci/pipelines/#view-pipelines).
### Fixing Failing Pipelines

To find the cause of a failing pipeline, check the pipeline log for the failing stage.

- For failing tests, we can also look directly via the merge request to get a cleaner overview of the exact test.
- For deployment failures, we should verify whether the CI variables such as tokens are up to date.
  - To change CI variables, navigate to Settings > CI/CD > Expand the variables.

On rare occasions, however, a simple restart of the pipeline can solve the failure.
A more in-depth guide for debugging pipelines is detailed [here](https://docs.gitlab.com/ee/ci/debugging.html).
## Kubernetes & Rancher
The [administration guide](/doc/administration_tasks.md#what-you-need-to-know-about-kubernetes) covers common
commands and tasks for interacting with the dev and prod clusters, including attaching to containers, reading logs, and
copying files.
## Common Issues
In this section, we provide a list of common problems and errors encountered during development.
### dashboard
- Page shows "Something went wrong..."
  - Open the Web Developer Tools (_Ctrl + Shift + I_)
    - Are there errors in the Inspector?
    - Are there errors in the Network tab?
    - Are there errors in the Console?
- Missing KPI scores (0/100)
  - This is due to the data-provider not receiving some tool result data
  - Check the data-provider logs
    - Does the ORT API return any results?
    - Is the OCCMD scan successful?
- Web console shows `CORS preflight fail ...`
  - The error occurs when the dashboard address is not whitelisted by the data-provider
  - The `CORS_ORIGIN` environment variable whitelists sites.
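For a local run, the whitelist can be set via the environment before starting the data-provider; a sketch, assuming the dashboard is served at its default local port (check the data-provider configuration for the exact accepted format):

```shell
# Allow the locally served dashboard to call the data-provider API.
export CORS_ORIGIN="http://localhost:5000"
echo "$CORS_ORIGIN"   # http://localhost:5000
```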
### data-provider
- `Unauthorized 401`
  - Check `.env`
    - Is the GitLab token still valid?
    - Are you connecting to the right address? (e.g. to prod instead of dev)
  - Check your current profile and also the `application.properties`
- `Consider defining a bean of type 'org.springdoc.webmvc.ui.SwaggerWelcomeCommon' ...`
  - This happens when the management port is not set
- `Binding to target ... failed`
  - Check the `application.properties` regarding the mentioned property
- `java.io.IOException: Creating directories for /.config/jgit failed`
  - Is the `XDG_CONFIG_HOME` environment variable set?
- `Unable to commit against JDBC Connection`
  - Is the database healthy?
  - Usually this error goes away by resetting the db and its docker volume
