Feature ideas

We take your ideas seriously! Read more about our prioritization process on our blog: https://productmanagement.port.io/posts/managing-feature-ideas
Add Multi-Branch Support for all GitOps Providers
For mature, established git repositories, getting port.yml changes into main can be a challenge. There needs to be a way for developers to preview their port.yml changes in the Port.io Non-Prod organization by having it look for port.yml changes in a lower branch such as feature/port-gitops. This allows them to preview changes without going through a PR process.

I currently have this working with GitHub Ocean (ADO currently does not support branches in the same way), but there are quirks to the process:

* It requires separate mappings for dev and prod (more tooling with Pulumi).
* The dev mappings require defining the mappings for the default branch first, followed by override mappings with a configured and maintained list of repos and the feature/port-gitops branch.
* This duality causes the main branch changes to be applied and then the preview branch changes to be applied again; on the next sync the same cycle repeats over and over.

Instead, if the lower branch exists, the integration should ignore main. I cannot simply set the global branch in the dev org, because that requires persisted branches in every repo, which will not fly.

The current default-branch mapping looks like this:

kind: file
port:
  entity:
    mappings:
      blueprint: '"environment"'
      icon: if .this.icon then .this.icon else "Environment" end
      identifier: .this.identifier
      properties:
        description: .this.properties.description
        platform_id: .this.properties.platform_id
      relations: {}
      team: if .this.team then .this.team else [] end
      title: .this.title
  itemsToParse: .content | if type == "array" then . else [.] end | map(select(.blueprint == "environment"))
  itemsToParseName: this
selector:
  files:
    - path: port.yml
  query: 'true'

It is followed by a copy of the same mapping with this added to the selector:

selector:
  files:
    - path: port.yml
      repos:
        - branch: feature/port-gitops
          name: poc-port-ocean
        - branch: feature/port-gitops
          name: platform-identity
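As a rough sketch of the behavior being asked for, a single mapping with a preview-branch fallback might look like the following. The previewBranch key is hypothetical and does not exist in the current GitHub Ocean mapping schema; it is shown only to illustrate the requested "use the lower branch if it exists, otherwise the default branch" behavior:

kind: file
selector:
  query: 'true'
  files:
    - path: port.yml
      # hypothetical key, not part of the current schema:
      # prefer this branch when it exists, otherwise fall back to the repo's default branch
      previewBranch: feature/port-gitops
port:
  entity:
    mappings:
      # unchanged from the mapping above
      blueprint: '"environment"'
      identifier: .this.identifier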
Data sources
Make CronJob backoffLimit configurable in port-ocean Helm chart
The port-ocean Helm chart hardcodes backoffLimit: 0 in the CronJob job template (cron.yaml L28). This value is not exposed as a configurable Helm value.

In Kubernetes environments with node autoscalers (Karpenter, Cluster Autoscaler), pods can be evicted at any time due to node consolidation or scale-down events. When a resync pod is evicted mid-execution, it exits non-zero, and with backoffLimit: 0 the entire Job is marked as permanently failed; no retry is attempted. This makes the self-hosted CronJob integration inherently fragile on autoscaled clusters, which represent the majority of production EKS/GKE/AKS deployments.

Requested change: expose backoffLimit as a configurable Helm value under workload.cron, e.g.:

workload:
  cron:
    backoffLimit: 3  # default: 0 (current behavior, for backward compat)

And in cron.yaml:

backoffLimit: {{ .Values.workload.cron.backoffLimit | default 0 }}

Why this matters: backoffLimit: 0 means any transient pod failure (node eviction, OOM kill, spot interruption, network blip during image pull) permanently fails the Job. Users currently cannot work around this without either (a) adding karpenter.sh/do-not-disrupt annotations, which only cover Karpenter and do not help with spot interruptions or OOM, or (b) forking the chart. A modest default like 3 would allow Kubernetes to retry the resync pod automatically while still bounding runaway retries, and the existing activeDeadlineSeconds already provides a time-based safety net.
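For context, a minimal sketch of how the rendered Job template might look with the proposed value applied. The resource name, schedule, image, and timeout below are illustrative, not the chart's actual output; only the backoffLimit field reflects the requested change:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: port-ocean-resync              # illustrative name
spec:
  schedule: "0 */4 * * *"              # illustrative schedule
  jobTemplate:
    spec:
      backoffLimit: 3                  # rendered from the proposed .Values.workload.cron.backoffLimit
      activeDeadlineSeconds: 3600      # existing time-based safety net; value illustrative
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: resync
              image: example.registry/port-ocean-integration:latest  # illustrative image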
Data sources
Add new vulnerability information in Wiz Ocean integration
We want to have additional properties in the Wiz Ocean integration. We want the integration to return this data and to change the mapping so it can be mapped properly into our catalog.

* We would like to have vulnerableAsset information for all the VulnerableAsset types. A full example of VulnerableAsset for ContainerImages, and a small subset for the rest of the types, is shown in the query below:

query VulnerabilityFindingsTable(
  $filterBy: VulnerabilityFindingFilters
  $first: Int
  $after: String
  $orderBy: VulnerabilityFindingOrder
) {
  vulnerabilityFindings(
    filterBy: $filterBy
    first: $first
    after: $after
    orderBy: $orderBy
  ) {
    nodes {
      id
      severity
      categories
      version
      detectionMethod
      score
      status
      description
      resolvedAt
      updatedAt
      firstDetectedAt
      publishedDate
      remediation
      environments
      link
      vulnerabilityExternalId
      portalUrl
      origin
      CVEDescription
      name
      detailedName
      artifactType {
        group
        ciComponent
        custom
        plugin
        osPackageManager
        codeLibraryLanguage
      }
      hasFix
      hasExploit
      isHighProfileThreat
      projects {
        id
        name
      }
      rootComponent {
        name
      }
      applicationServices {
        id
      }
      vulnerableAsset {
        ... on VulnerableAssetBase {
          id
          type
          name
          cloudPlatform
          subscriptionName
          subscriptionExternalId
          nativeType
        }
        ... on VulnerableAssetVirtualMachine {
          id
          type
          name
          cloudPlatform
          operatingSystem
          nativeType
        }
        ... on VulnerableAssetContainerImage {
          # Core identification fields
          id
          type
          name
          cloudPlatform
          # Subscription/Account information
          subscriptionName
          subscriptionExternalId
          subscriptionId
          # Resource metadata
          tags
          nativeType
          # Network exposure fields
          hasLimitedInternetExposure
          hasWideInternetExposure
          isAccessibleFromVPN
          isAccessibleFromOtherVnets
          isAccessibleFromOtherSubscriptions
          # Container-specific fields
          repository {
            name
          }
          registry {
            name
          }
          scanSource
          # Execution context (where the image is running)
          executionControllers {
            id
            entityType
            externalId
            providerUniqueId
            name
            subscriptionExternalId
            subscriptionId
            subscriptionName
            ancestors {
              id
              name
              entityType
              externalId
              providerUniqueId
            }
          }
          # Additional fields from Splunk integration
          imageId
          region
          providerUniqueId
          cloudProviderURL
          status
        }
        ... on VulnerableAssetContainer {
          id
          type
          name
          cloudPlatform
          nativeType
        }
        ... on VulnerableAssetServerless {
          id
          type
          name
          cloudPlatform
          nativeType
        }
        ... on VulnerableAssetRepositoryBranch {
          id
          type
          name
          cloudPlatform
          repositoryId
          repositoryName
          nativeType
        }
      }
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}

This will allow us to get the additional information we need in our catalog.
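If the integration exposed these fields, a mapping along the following lines could surface them in the catalog. This is only a sketch: the kind, blueprint identifier, and property names are illustrative and not part of the current Wiz Ocean mapping:

resources:
  - kind: vulnerability                    # assumed kind name for vulnerability findings
    selector:
      query: 'true'
    port:
      entity:
        mappings:
          identifier: .id
          title: .name
          blueprint: '"wizVulnerability"'  # illustrative blueprint identifier
          properties:
            severity: .severity
            status: .status
            assetName: .vulnerableAsset.name            # assumes vulnerableAsset is returned by the integration
            assetType: .vulnerableAsset.type
            imageRepository: .vulnerableAsset.repository.name  # container-image specific field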
Data sources