New Relic offers a generous free tier of 100 GB of data ingestion per month. But with the default scrape duration of 30s, that allowance can get used up pretty fast. You can change the scrape duration by editing the nri-prometheus configuration as described in this article.
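To see why this matters: with the same payload per scrape, ingestion scales inversely with the scrape interval, so going from 30s to 300s cuts Prometheus-derived ingestion by roughly 10x. A back-of-envelope sketch (the numbers are illustrative, not measured):

```shell
# Scrapes per 30-day month at the default 30s interval vs. a relaxed 300s.
scrapes_default=$((30 * 24 * 3600 / 30))    # scrapes/month at 30s
scrapes_relaxed=$((30 * 24 * 3600 / 300))   # scrapes/month at 300s
echo "ingestion reduction: $((scrapes_default / scrapes_relaxed))x"
# → ingestion reduction: 10x
```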

Get the existing configuration:

```shell
kubectl get cm nri-bundle-nri-prometheus-config -o yaml -n newrelic
```

```yaml
apiVersion: v1
data:
  config.yaml: |
    cluster_name: civo
    audit: false
    insecure_skip_verify: false
    require_scrape_enabled_label_for_nodes: true
    scrape_enabled_label: prometheus.io/scrape
    scrape_endpoints: false
    scrape_services: true
    transformations: []
    verbose: false
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: nri-bundle
    app.kubernetes.io/name: nri-prometheus
    app.kubernetes.io/version: 2.12.0
    helm.sh/chart: nri-prometheus-1.13.1
  name: nri-bundle-nri-prometheus-config
  namespace: newrelic
```

Add the `scrape_duration` key to the above config as follows. Here it is set to `300s`, so targets are scraped every five minutes instead of every 30 seconds:

```yaml
apiVersion: v1
data:
  config.yaml: |
    cluster_name: civo
    audit: false
    insecure_skip_verify: false
    require_scrape_enabled_label_for_nodes: true
    scrape_enabled_label: prometheus.io/scrape
    scrape_endpoints: false
    scrape_services: true
    transformations: []
    verbose: false
    scrape_duration: "300s"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: nri-bundle
    app.kubernetes.io/name: nri-prometheus
    app.kubernetes.io/version: 2.12.0
    helm.sh/chart: nri-prometheus-1.13.1
  name: nri-bundle-nri-prometheus-config
  namespace: newrelic
```
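To apply the change, edit the ConfigMap and then restart the nri-prometheus pods so they pick up the new value. A sketch, assuming the resource names shown above; the deployment name comes from the Helm release, so confirm it with `kubectl get deploy -n newrelic` first:

```shell
# Open the ConfigMap in your editor and add the scrape_duration line shown above:
kubectl edit configmap nri-bundle-nri-prometheus-config -n newrelic

# Restart the deployment so the running pod reloads the updated config
# (deployment name assumed from the nri-bundle Helm release):
kubectl rollout restart deployment nri-bundle-nri-prometheus -n newrelic
```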
