improve(self-hosting): remove goose, custom migration, docs, remove zookeeper
ROADMAP.md (deleted, 15 lines)
@@ -1,15 +0,0 @@
-# Roadmap
-
-## Simple todos
-
-- [ ] add session_id on events table, link this id on create
-- [ ] add overview page containing
-  - [x] User histogram (last 30 minutes)
-  - [ ] Bounce rate
-  - [ ] Session duration
-  - [ ] Views per session
-  - [ ] Unique users
-  - [ ] Total users
-  - [ ] Total pageviews
-  - [ ] Total events
-- [ ]
@@ -15,10 +15,6 @@ RUN corepack enable && apt-get update && \
     && apt-get clean && \
     rm -rf /var/lib/apt/lists/*
 
-RUN curl -fsSL \
-    https://raw.githubusercontent.com/pressly/goose/master/install.sh |\
-    sh
-
 ARG DATABASE_URL
 ENV DATABASE_URL=$DATABASE_URL
 ENV PNPM_HOME="/pnpm"
@@ -1,14 +1,75 @@
 ---
 title: Introduction to OpenPanel
-description: The OpenPanel SDKs provide a set of core methods that allow you to track events, identify users, and more. Here's an overview of the key methods available in the SDKs.
+description: Get started with OpenPanel's powerful analytics platform that combines the best of product and web analytics in one simple solution.
 ---
 
 <Callout>
-While all OpenPanel SDKs share a common set of core methods, some may have
-syntax variations or additional methods specific to their environment. This
-documentation provides an overview of the base methods and available SDKs.
+OpenPanel is currently in beta and free to use. We're constantly improving our
+platform based on user feedback.
 </Callout>
 
+## What is OpenPanel?
+
+OpenPanel is an open-source analytics platform that combines product analytics (like Mixpanel) with web analytics (like Plausible) into one simple solution. Whether you're tracking website visitors or analyzing user behavior in your app, OpenPanel provides the insights you need without the complexity.
+
+## Key Features
+
+### Web Analytics
+
+- **Real-time data**: See visitor activity as it happens
+- **Traffic sources**: Understand where your visitors come from
+- **Geographic insights**: Track visitor locations and trends
+- **Device analytics**: Monitor usage across different devices
+- **Page performance**: Analyze your most visited pages
+
+### Product Analytics
+
+- **Event tracking**: Monitor user actions and interactions
+- **User profiles**: Build detailed user journey insights
+- **Funnels**: Analyze conversion paths
+- **Retention**: Track user engagement over time
+- **Custom properties**: Add context to your events
+
+## Getting Started
+
+1. **Installation**: Choose your preferred method:
+   - [Script tag](/docs/sdks/script) - Quickest way to get started
+   - [Web SDK](/docs/sdks/web) - For more control and TypeScript support
+   - [React](/docs/sdks/react) - Native React integration
+   - [Next.js](/docs/sdks/nextjs) - Optimized for Next.js apps
+
+2. **Core Methods**:
+
+   ```js
+   // Track an event
+   track('button_clicked', {
+     buttonId: 'signup',
+     location: 'header'
+   });
+
+   // Identify a user
+   identify({
+     profileId: 'user123',
+     email: 'user@example.com',
+     firstName: 'John'
+   });
+   ```
+
+## Privacy First
+
+OpenPanel is built with privacy in mind:
+
+- No cookies required
+- GDPR and CCPA compliant
+- Self-hosting option available
+- Full control over your data
+
+## Open Source
+
+OpenPanel is fully open-source and available on [GitHub](https://github.com/Openpanel-dev/openpanel). We believe in transparency and community-driven development.
+
+## Need Help?
+
+- Join our [Discord community](https://discord.gg/openpanel)
+- Check our [GitHub issues](https://github.com/Openpanel-dev/openpanel/issues)
+- Email us at [hello@openpanel.dev](mailto:hello@openpanel.dev)
+
 ## Core Methods
 
 ### Set global properties
apps/public/content/docs/self-hosting/changelog.mdx (new file, 23 lines)
@@ -0,0 +1,23 @@
+---
+title: Changelog for self-hosting
+description: This is a list of changes that have been made to the self-hosting setup.
+---
+
+## 1.0.0 (stable)
+
+OpenPanel self-hosting is now stable, and there should not be any breaking changes in the future.
+
+If you are upgrading from a previous version, keep an eye on the logs since they will tell you if you need to take any actions. It's not mandatory, but it's recommended since it might bite you in the *ss later.
+
+### New environment variables
+
+<Callout>
+If you are upgrading from a previous version, you'll need to edit your `.env` file if you want to use these new variables.
+</Callout>
+
+- `ALLOW_REGISTRATION` - If set to `false`, new users will not be able to register (only the first user can register).
+- `ALLOW_INVITATION` - If set to `false`, new users will not be able to be invited.
+
+## 0.0.6
+
+Removed Clerk.com and added self-hosted authentication.
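The two flags above can be sketched in TypeScript. This is a hedged illustration of the documented behavior (registration is off unless `ALLOW_REGISTRATION` is `true`; invitations are on unless `ALLOW_INVITATION` is `false`); the helper names are ours, not OpenPanel's actual implementation:

```typescript
// Hypothetical helpers illustrating the documented defaults;
// the real OpenPanel code may differ.
type Env = Record<string, string | undefined>;

// Registration is disabled unless explicitly opted in.
export const allowRegistration = (env: Env): boolean =>
  env.ALLOW_REGISTRATION === 'true';

// Invitations are enabled unless explicitly opted out.
export const allowInvitation = (env: Env): boolean =>
  env.ALLOW_INVITATION !== 'false';
```

Note the asymmetry: an unset or malformed value falls back to the safe default in each case (no open registration, invitations still working).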
@@ -1,5 +1,9 @@
 {
   "title": "Self-hosting",
   "defaultOpen": true,
-  "pages": ["self-hosting", "migrating-from-clerk"]
+  "pages": [
+    "[Get started](/docs/self-hosting/self-hosting)",
+    "changelog",
+    "migrating-from-clerk"
+  ]
 }
@@ -47,7 +47,7 @@ docker compose cp ./users-dump.csv op-api:/app/packages/db/code-migrations/users
 <Step>
 Run the migration:
 ```bash
-docker compose exec -it op-api bash -c "cd /app/packages/db && pnpm migrate:deploy:db:code 2-accounts.ts"
+docker compose exec -it op-api bash -c "cd /app/packages/db && pnpm migrate:deploy:code 2-accounts.ts"
 ```
 </Step>
 </Steps>
@@ -1,13 +1,10 @@
 ---
-title: Self-hosting
+title: Get started with self-hosting
 description: This is a simple guide on how to get started with OpenPanel on your own VPS.
 ---
 
 import { Step, Steps } from 'fumadocs-ui/components/steps';
 
-<Callout>OpenPanel is not stable yet. If you still want to self-host you can go ahead. Bear in mind that new changes might give a little headache to keep up with.</Callout>
-<Callout>From version 0.0.5 we have removed Clerk.com. If you are upgrading from a previous version, you will need to export your users from Clerk and import them into OpenPanel. Read more about it here: [Migrating from Clerk](/docs/self-hosting/migrating-from-clerk)</Callout>
-
 ## Instructions
 
 ### Prerequisites
@@ -54,8 +51,8 @@ cd openpanel/self-hosting
 
 1. Install docker
 2. Install node
-3. Install pnpm
-4. Run the `npx jiti ./quiz.ts` script inside the self-hosting folder
+3. Install npm
+4. Run the `npm run quiz` script inside the self-hosting folder
 
 </Step>
 <Step>
@@ -110,6 +107,7 @@ Some of OpenPanel's features require e-mail. We use Resend as our transactional
 <Callout>This is not required for the basic setup, but it is required for some features.</Callout>
 
 Features that require e-mail:
 
 - Password reset
 - Invitations
 - more will be added over time
@@ -120,4 +118,62 @@ If you use a managed Redis service, you may need to set the `notify-keyspace-eve
 
 Without this setting we won't be able to listen for expired keys, which we use for calculating currently active visitors.
 
 > You will see a warning in the logs if this needs to be set manually.
+
+### Registration / Invitations
+
+By default, registrations are disabled after the first user is created.
+
+You can change this by setting the `ALLOW_REGISTRATION` environment variable to `true`.
+
+```bash title=".env"
+ALLOW_REGISTRATION=true
+```
+
+Invitations are enabled by default. You can disable them by setting the `ALLOW_INVITATION` environment variable to `false`.
+
+```bash title=".env"
+ALLOW_INVITATION=false
+```
+
+## Helpful scripts
+
+OpenPanel comes with several utility scripts to help manage your self-hosted instance:
+
+### Basic Operations
+
+```bash
+./start  # Start all OpenPanel services
+./stop   # Stop all OpenPanel services
+./logs   # View real-time logs from all services
+```
+
+### Maintenance
+
+```bash
+./rebuild <service-name>  # Rebuild and restart a specific service
+# Example: ./rebuild op-dashboard
+```
+
+### Troubleshooting
+
+```bash
+./danger_wipe_everything  # ⚠️ Removes all containers, volumes, and data
+# Only use this if you want to start fresh!
+```
+
+<Callout>
+The `danger_wipe_everything` script will delete all your OpenPanel data, including databases, configurations, and cached files. Use with extreme caution!
+</Callout>
+
+All these scripts should be run from within the `self-hosting` directory. Make sure the scripts are executable (`chmod +x script-name` if needed).
+
+## Updating
+
+To grab the latest and greatest from OpenPanel, just run the `./update` script inside the self-hosting folder.
+
+<Callout>
+If you don't have the `./update` script, you can run `git pull` and then `./update`.
+</Callout>
+
+Also read any changes in the [changelog](/changelog) and apply them to your instance.
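The Redis note above relies on keyspace notifications: once `notify-keyspace-events` includes expired-key events, Redis publishes each expiration on a well-known channel. A small hedged sketch of the channel name a listener would subscribe to (the helper is hypothetical and not part of OpenPanel's codebase; the channel format is standard Redis):

```typescript
// Redis publishes expired-key notifications on `__keyevent@<db>__:expired`
// when `notify-keyspace-events` enables expired events.
// Hypothetical helper, not OpenPanel's actual code.
export const expiredKeyChannel = (db = 0): string =>
  `__keyevent@${db}__:expired`;

// A subscriber client would listen on expiredKeyChannel(0) and count
// expirations of session keys to estimate currently active visitors.
```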
@@ -1,4 +0,0 @@
-{
-  "schemaVersion": 2,
-  "dockerfilePath": "./apps/api/Dockerfile"
-}

@@ -1,4 +0,0 @@
-{
-  "schemaVersion": 2,
-  "dockerfilePath": "./apps/dashboard/Dockerfile"
-}

@@ -1,4 +0,0 @@
-{
-  "schemaVersion": 2,
-  "dockerfilePath": "./apps/docs/Dockerfile"
-}

@@ -1,4 +0,0 @@
-{
-  "schemaVersion": 2,
-  "dockerfilePath": "./apps/public/Dockerfile"
-}

@@ -1,4 +0,0 @@
-{
-  "schemaVersion": 2,
-  "dockerfilePath": "./apps/worker/Dockerfile"
-}
@@ -35,6 +35,7 @@ services:
       - ./docker/data/op-ch-logs:/var/log/clickhouse-server
       - ./self-hosting/clickhouse/clickhouse-config.xml:/etc/clickhouse-server/config.d/op-config.xml
       - ./self-hosting/clickhouse/clickhouse-user-config.xml:/etc/clickhouse-server/users.d/op-user-config.xml
+      - ./self-hosting/clickhouse/init-db.sh:/docker-entrypoint-initdb.d/init-db.sh:ro
     ulimits:
       nofile:
         soft: 262144

@@ -43,18 +44,3 @@ services:
       - "8123:8123" # HTTP interface
       - "9000:9000" # Native/TCP interface
       - "9009:9009" # Inter-server communication
-
-  op-zk:
-    image: clickhouse/clickhouse-server:24.3.2-alpine
-    volumes:
-      - ./docker/data/op-zk-data:/var/lib/clickhouse
-      - ./self-hosting/clickhouse/clickhouse-keeper-config.xml:/etc/clickhouse-server/config.xml
-    command: [ 'clickhouse-keeper', '--config-file', '/etc/clickhouse-server/config.xml' ]
-    restart: always
-    ulimits:
-      nofile:
-        soft: 262144
-        hard: 262144
-    ports:
-      - "9181:9181" # Keeper port
-      - "9234:9234" # Keeper Raft port
packages/db/code-migrations/3-init-ch.ts (new file, 372 lines)
@@ -0,0 +1,372 @@
+import fs from 'node:fs';
+import path from 'node:path';
+import { formatClickhouseDate } from '../src/clickhouse/client';
+import {
+  createDatabase,
+  createMaterializedView,
+  createTable,
+  dropTable,
+  getExistingTables,
+  moveDataBetweenTables,
+  renameTable,
+  runClickhouseMigrationCommands,
+} from '../src/clickhouse/migration';
+import { printBoxMessage } from './helpers';
+
+export async function up() {
+  const replicatedVersion = '1';
+  const existingTables = await getExistingTables();
+  const hasSelfHosting = existingTables.includes('self_hosting_distributed');
+  const hasEvents = existingTables.includes('events_distributed');
+  const hasEventsV2 = existingTables.includes('events_v2');
+  const hasEventsBots = existingTables.includes('events_bots_distributed');
+  const hasProfiles = existingTables.includes('profiles_distributed');
+  const hasProfileAliases = existingTables.includes(
+    'profile_aliases_distributed',
+  );
+
+  const isSelfHosting = !!process.env.SELF_HOSTING;
+  const isClustered = !isSelfHosting;
+
+  const isSelfHostingPostCluster =
+    existingTables.includes('events_replicated') && isSelfHosting;
+
+  const isSelfHostingPreCluster =
+    !isSelfHostingPostCluster &&
+    existingTables.includes('events_v2') &&
+    isSelfHosting;
+
+  const isSelfHostingOld = existingTables.length !== 0 && isSelfHosting;
+
+  const sqls: string[] = [];
+
+  // Move tables to old names if they exist
+  if (isSelfHostingOld) {
+    sqls.push(
+      ...existingTables
+        .filter((table) => {
+          return (
+            !table.endsWith('_tmp') && !existingTables.includes(`${table}_tmp`)
+          );
+        })
+        .flatMap((table) => {
+          return renameTable({
+            from: table,
+            to: `${table}_tmp`,
+            isClustered: false,
+          });
+        }),
+    );
+  }
+
+  sqls.push(
+    createDatabase('openpanel', isClustered),
+    // Create new tables
+    ...createTable({
+      name: 'self_hosting',
+      columns: ['`created_at` Date', '`domain` String', '`count` UInt64'],
+      orderBy: ['domain', 'created_at'],
+      partitionBy: 'toYYYYMM(created_at)',
+      distributionHash: 'cityHash64(domain)',
+      replicatedVersion,
+      isClustered,
+    }),
+    ...createTable({
+      name: 'events',
+      columns: [
+        '`id` UUID DEFAULT generateUUIDv4()',
+        '`name` LowCardinality(String)',
+        '`sdk_name` LowCardinality(String)',
+        '`sdk_version` LowCardinality(String)',
+        '`device_id` String CODEC(ZSTD(3))',
+        '`profile_id` String CODEC(ZSTD(3))',
+        '`project_id` String CODEC(ZSTD(3))',
+        '`session_id` String CODEC(LZ4)',
+        '`path` String CODEC(ZSTD(3))',
+        '`origin` String CODEC(ZSTD(3))',
+        '`referrer` String CODEC(ZSTD(3))',
+        '`referrer_name` String CODEC(ZSTD(3))',
+        '`referrer_type` LowCardinality(String)',
+        '`duration` UInt64 CODEC(Delta(4), LZ4)',
+        '`properties` Map(String, String) CODEC(ZSTD(3))',
+        '`created_at` DateTime64(3) CODEC(DoubleDelta, ZSTD(3))',
+        '`country` LowCardinality(FixedString(2))',
+        '`city` String',
+        '`region` LowCardinality(String)',
+        '`longitude` Nullable(Float32) CODEC(Gorilla, LZ4)',
+        '`latitude` Nullable(Float32) CODEC(Gorilla, LZ4)',
+        '`os` LowCardinality(String)',
+        '`os_version` LowCardinality(String)',
+        '`browser` LowCardinality(String)',
+        '`browser_version` LowCardinality(String)',
+        '`device` LowCardinality(String)',
+        '`brand` LowCardinality(String)',
+        '`model` LowCardinality(String)',
+        '`imported_at` Nullable(DateTime) CODEC(Delta(4), LZ4)',
+      ],
+      indices: [
+        'INDEX idx_name name TYPE bloom_filter GRANULARITY 1',
+        "INDEX idx_properties_bounce properties['__bounce'] TYPE set(3) GRANULARITY 1",
+        'INDEX idx_origin origin TYPE bloom_filter(0.05) GRANULARITY 1',
+        'INDEX idx_path path TYPE bloom_filter(0.01) GRANULARITY 1',
+      ],
+      orderBy: ['project_id', 'toDate(created_at)', 'profile_id', 'name'],
+      partitionBy: 'toYYYYMM(created_at)',
+      settings: {
+        index_granularity: 8192,
+      },
+      distributionHash:
+        'cityHash64(project_id, toString(toStartOfHour(created_at)))',
+      replicatedVersion,
+      isClustered,
+    }),
+    ...createTable({
+      name: 'events_bots',
+      columns: [
+        '`id` UUID DEFAULT generateUUIDv4()',
+        '`project_id` String',
+        '`name` String',
+        '`type` String',
+        '`path` String',
+        '`created_at` DateTime64(3)',
+      ],
+      orderBy: ['project_id', 'created_at'],
+      settings: {
+        index_granularity: 8192,
+      },
+      distributionHash:
+        'cityHash64(project_id, toString(toStartOfDay(created_at)))',
+      replicatedVersion,
+      isClustered,
+    }),
+    ...createTable({
+      name: 'profiles',
+      columns: [
+        '`id` String CODEC(ZSTD(3))',
+        '`is_external` Bool',
+        '`first_name` String CODEC(ZSTD(3))',
+        '`last_name` String CODEC(ZSTD(3))',
+        '`email` String CODEC(ZSTD(3))',
+        '`avatar` String CODEC(ZSTD(3))',
+        '`properties` Map(String, String) CODEC(ZSTD(3))',
+        '`project_id` String CODEC(ZSTD(3))',
+        '`created_at` DateTime64(3) CODEC(Delta(4), LZ4)',
+      ],
+      indices: [
+        'INDEX idx_first_name first_name TYPE bloom_filter GRANULARITY 1',
+        'INDEX idx_last_name last_name TYPE bloom_filter GRANULARITY 1',
+        'INDEX idx_email email TYPE bloom_filter GRANULARITY 1',
+      ],
+      engine: 'ReplacingMergeTree(created_at)',
+      orderBy: ['project_id', 'id'],
+      partitionBy: 'toYYYYMM(created_at)',
+      settings: {
+        index_granularity: 8192,
+      },
+      distributionHash: 'cityHash64(project_id)',
+      replicatedVersion,
+      isClustered,
+    }),
+    ...createTable({
+      name: 'profile_aliases',
+      columns: [
+        '`project_id` String',
+        '`profile_id` String',
+        '`alias` String',
+        '`created_at` DateTime',
+      ],
+      orderBy: ['project_id', 'profile_id', 'alias', 'created_at'],
+      settings: {
+        index_granularity: 8192,
+      },
+      distributionHash: 'cityHash64(project_id)',
+      replicatedVersion,
+      isClustered,
+    }),
+
+    // Create materialized views
+    ...createMaterializedView({
+      name: 'dau_mv',
+      tableName: 'events',
+      orderBy: ['project_id', 'date'],
+      partitionBy: 'toYYYYMMDD(date)',
+      query: `SELECT
+        toDate(created_at) as date,
+        uniqState(profile_id) as profile_id,
+        project_id
+      FROM {events}
+      GROUP BY date, project_id`,
+      distributionHash: 'cityHash64(project_id, date)',
+      replicatedVersion,
+      isClustered,
+    }),
+    ...createMaterializedView({
+      name: 'cohort_events_mv',
+      tableName: 'events',
+      orderBy: ['project_id', 'name', 'created_at', 'profile_id'],
+      query: `SELECT
+        project_id,
+        name,
+        toDate(created_at) AS created_at,
+        profile_id,
+        COUNT() AS event_count
+      FROM {events}
+      WHERE profile_id != device_id
+      GROUP BY project_id, name, created_at, profile_id`,
+      distributionHash: 'cityHash64(project_id, toString(created_at))',
+      replicatedVersion,
+      isClustered,
+    }),
+    ...createMaterializedView({
+      name: 'distinct_event_names_mv',
+      tableName: 'events',
+      orderBy: ['project_id', 'name', 'created_at'],
+      query: `SELECT
+        project_id,
+        name,
+        max(created_at) AS created_at,
+        count() AS event_count
+      FROM {events}
+      GROUP BY project_id, name`,
+      distributionHash: 'cityHash64(name, created_at)',
+      replicatedVersion,
+      isClustered,
+    }),
+    ...createMaterializedView({
+      name: 'event_property_values_mv',
+      tableName: 'events',
+      orderBy: ['project_id', 'name', 'property_key', 'property_value'],
+      query: `SELECT
+        project_id,
+        name,
+        key_value.keys as property_key,
+        key_value.values as property_value,
+        created_at
+      FROM (
+        SELECT
+          project_id,
+          name,
+          untuple(arrayJoin(properties)) as key_value,
+          max(created_at) as created_at
+        FROM {events}
+        GROUP BY project_id, name, key_value
+      )
+      WHERE property_value != ''
+        AND property_key != ''
+        AND property_key NOT IN ('__duration_from', '__properties_from')
+      GROUP BY project_id, name, property_key, property_value, created_at`,
+      distributionHash: 'cityHash64(project_id, name)',
+      replicatedVersion,
+      isClustered,
+    }),
+  );
+
+  if (isSelfHostingPostCluster) {
+    sqls.push(
+      // Move data between tables
+      ...(hasSelfHosting
+        ? moveDataBetweenTables({
+            from: 'self_hosting_replicated_tmp',
+            to: 'self_hosting',
+            batch: {
+              column: 'created_at',
+              interval: 'month',
+              transform: (date) => {
+                return formatClickhouseDate(date, true);
+              },
+            },
+          })
+        : []),
+      ...(hasProfileAliases
+        ? moveDataBetweenTables({
+            from: 'profile_aliases_replicated_tmp',
+            to: 'profile_aliases',
+            batch: {
+              column: 'created_at',
+              interval: 'month',
+            },
+          })
+        : []),
+      ...(hasEventsBots
+        ? moveDataBetweenTables({
+            from: 'events_bots_replicated_tmp',
+            to: 'events_bots',
+            batch: {
+              column: 'created_at',
+              interval: 'month',
+            },
+          })
+        : []),
+      ...(hasProfiles
+        ? moveDataBetweenTables({
+            from: 'profiles_replicated_tmp',
+            to: 'profiles',
+            batch: {
+              column: 'created_at',
+              interval: 'month',
+            },
+          })
+        : []),
+      ...(hasEvents
+        ? moveDataBetweenTables({
+            from: 'events_replicated_tmp',
+            to: 'events',
+            batch: {
+              column: 'created_at',
+              interval: 'week',
+            },
+          })
+        : []),
+    );
+  }
+
+  if (isSelfHostingPreCluster) {
+    sqls.push(
+      ...(hasEventsV2
+        ? moveDataBetweenTables({
+            from: 'events_v2',
+            to: 'events',
+            batch: {
+              column: 'created_at',
+              interval: 'week',
+            },
+          })
+        : []),
+    );
+  }
+
+  fs.writeFileSync(
+    path.join(__dirname, '3-init-ch.sql'),
+    sqls
+      .map((sql) =>
+        sql
+          .trim()
+          .replace(/;$/, '')
+          .replace(/\n{2,}/g, '\n')
+          .concat(';'),
+      )
+      .join('\n\n---\n\n'),
+  );
+
+  printBoxMessage('Will start migration for self-hosting setup.', [
+    'This will move all data from the old tables to the new ones.',
+    'This might take a while depending on your server.',
+  ]);
+
+  if (!process.argv.includes('--dry')) {
+    await runClickhouseMigrationCommands(sqls);
+  }
+
+  if (isSelfHostingOld) {
+    printBoxMessage(
+      '⚠️ Please run the following command to clean up unused tables:',
+      existingTables.map(
+        (table) =>
+          `docker compose exec -it op-ch clickhouse-client --query "${dropTable(
+            `openpanel.${table}_tmp`,
+            false,
+          )}"`,
+      ),
+    );
+  }
+}
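The `writeFileSync` step in the migration above normalizes each generated statement before dumping them to `3-init-ch.sql`. Extracted as a standalone sketch (the helper name is ours, not from the source):

```typescript
// Trim each statement, drop any trailing semicolon, collapse runs of
// blank lines, re-append ';', then join statements with a '---' separator.
export const formatSqlDump = (sqls: string[]): string =>
  sqls
    .map((sql) =>
      sql
        .trim()
        .replace(/;$/, '')
        .replace(/\n{2,}/g, '\n')
        .concat(';'),
    )
    .join('\n\n---\n\n');
```

Stripping and re-appending the semicolon makes the output uniform regardless of whether a builder emitted one.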
@@ -5,7 +5,7 @@ import { printBoxMessage } from './helpers';
 
 async function migrate() {
   const args = process.argv.slice(2);
-  const migration = args[0];
+  const migration = args.filter((arg) => !arg.startsWith('--'))[0];
 
   const migrationsDir = path.join(__dirname, '..', 'code-migrations');
   const migrations = fs.readdirSync(migrationsDir).filter((file) => {

@@ -22,7 +22,7 @@ async function migrate() {
 
   for (const file of migrations) {
     if (finishedMigrations.some((migration) => migration.name === file)) {
-      printBoxMessage('⏭️ Skipping Migration ⏭️', [`${file}`]);
+      printBoxMessage('✅ Already Migrated ✅', [`${file}`]);
       continue;
     }
 
@@ -1,5 +1,5 @@
 export * from './src/prisma-client';
-export * from './src/clickhouse-client';
+export * from './src/clickhouse/client';
 export * from './src/sql-builder';
 export * from './src/services/chart.service';
 export * from './src/services/clients.service';
@@ -1,112 +0,0 @@
--- +goose Up
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS self_hosting
-(
-    created_at Date,
-    domain String,
-    count UInt64
-)
-ENGINE = MergeTree()
-ORDER BY (domain, created_at)
-PARTITION BY toYYYYMM(created_at);
--- +goose StatementEnd
-
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS events_v2 (
-    `id` UUID DEFAULT generateUUIDv4(),
-    `name` String,
-    `sdk_name` String,
-    `sdk_version` String,
-    `device_id` String,
-    `profile_id` String,
-    `project_id` String,
-    `session_id` String,
-    `path` String,
-    `origin` String,
-    `referrer` String,
-    `referrer_name` String,
-    `referrer_type` String,
-    `duration` UInt64,
-    `properties` Map(String, String),
-    `created_at` DateTime64(3),
-    `country` String,
-    `city` String,
-    `region` String,
-    `longitude` Nullable(Float32),
-    `latitude` Nullable(Float32),
-    `os` String,
-    `os_version` String,
-    `browser` String,
-    `browser_version` String,
-    `device` String,
-    `brand` String,
-    `model` String,
-    `imported_at` Nullable(DateTime),
-    INDEX idx_name name TYPE bloom_filter GRANULARITY 1,
-    INDEX idx_properties_bounce properties ['__bounce'] TYPE set (3) GRANULARITY 1,
-    INDEX idx_origin origin TYPE bloom_filter(0.05) GRANULARITY 1,
-    INDEX idx_path path TYPE bloom_filter(0.01) GRANULARITY 1
-) ENGINE = MergeTree PARTITION BY toYYYYMM(created_at)
-ORDER BY
-    (project_id, toDate(created_at), profile_id, name) SETTINGS index_granularity = 8192;
--- +goose StatementEnd
-
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS events_bots (
-    `id` UUID DEFAULT generateUUIDv4(),
-    `project_id` String,
-    `name` String,
-    `type` String,
-    `path` String,
-    `created_at` DateTime64(3)
-) ENGINE MergeTree
-ORDER BY
-    (project_id, created_at) SETTINGS index_granularity = 8192;
--- +goose StatementEnd
-
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS profiles (
-    `id` String,
-    `is_external` Bool,
-    `first_name` String,
-    `last_name` String,
-    `email` String,
-    `avatar` String,
-    `properties` Map(String, String),
-    `project_id` String,
-    `created_at` DateTime
-) ENGINE = ReplacingMergeTree(created_at)
-ORDER BY
-    (id) SETTINGS index_granularity = 8192;
--- +goose StatementEnd
-
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS profile_aliases (
-    `project_id` String,
-    `profile_id` String,
-    `alias` String,
-    `created_at` DateTime
-) ENGINE = MergeTree
-ORDER BY
-    (project_id, profile_id, alias, created_at) SETTINGS index_granularity = 8192;
--- +goose StatementEnd
-
--- +goose StatementBegin
-CREATE MATERIALIZED VIEW IF NOT EXISTS dau_mv ENGINE = AggregatingMergeTree() PARTITION BY toYYYYMMDD(date)
-ORDER BY
-    (project_id, date) POPULATE AS
-SELECT
-    toDate(created_at) as date,
-    uniqState(profile_id) as profile_id,
-    project_id
-FROM
-    events_v2
-GROUP BY
-    date,
-    project_id;
--- +goose StatementEnd
-
--- +goose Down
--- +goose StatementBegin
-SELECT 'down SQL query';
--- +goose StatementEnd
@@ -1,44 +0,0 @@
--- +goose Up
--- +goose StatementBegin
-CREATE TABLE profiles_tmp
-(
-    `id` String,
-    `is_external` Bool,
-    `first_name` String,
-    `last_name` String,
-    `email` String,
-    `avatar` String,
-    `properties` Map(String, String),
-    `project_id` String,
-    `created_at` DateTime,
-    INDEX idx_first_name first_name TYPE bloom_filter GRANULARITY 1,
-    INDEX idx_last_name last_name TYPE bloom_filter GRANULARITY 1,
-    INDEX idx_email email TYPE bloom_filter GRANULARITY 1
-)
-ENGINE = ReplacingMergeTree(created_at)
-PARTITION BY toYYYYMM(created_at)
-ORDER BY (project_id, created_at, id)
-SETTINGS index_granularity = 8192;
--- +goose StatementEnd
--- +goose StatementBegin
-INSERT INTO profiles_tmp SELECT
-    id,
-    is_external,
-    first_name,
-    last_name,
-    email,
-    avatar,
-    properties,
-    project_id,
-    created_at
-FROM profiles;
--- +goose StatementEnd
--- +goose StatementBegin
-OPTIMIZE TABLE profiles_tmp FINAL;
--- +goose StatementEnd
--- +goose StatementBegin
-RENAME TABLE profiles TO profiles_old, profiles_tmp TO profiles;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE profiles_old;
--- +goose StatementEnd
@@ -1,55 +0,0 @@
--- +goose Up
--- +goose StatementBegin
-CREATE TABLE profiles_fixed
-(
-    `id` String,
-    `is_external` Bool,
-    `first_name` String,
-    `last_name` String,
-    `email` String,
-    `avatar` String,
-    `properties` Map(String, String),
-    `project_id` String,
-    `created_at` DateTime,
-    INDEX idx_first_name first_name TYPE bloom_filter GRANULARITY 1,
-    INDEX idx_last_name last_name TYPE bloom_filter GRANULARITY 1,
-    INDEX idx_email email TYPE bloom_filter GRANULARITY 1
-)
-ENGINE = ReplacingMergeTree(created_at)
-PARTITION BY toYYYYMM(created_at)
-ORDER BY (project_id, id)
-SETTINGS index_granularity = 8192;
--- +goose StatementEnd
-
--- +goose StatementBegin
-INSERT INTO profiles_fixed SELECT
-    id,
-    is_external,
-    first_name,
-    last_name,
-    email,
-    avatar,
-    properties,
-    project_id,
-    created_at
-FROM profiles;
--- +goose StatementEnd
-
--- +goose StatementBegin
-OPTIMIZE TABLE profiles_fixed FINAL;
--- +goose StatementEnd
-
--- +goose StatementBegin
-RENAME TABLE profiles TO profiles_old, profiles_fixed TO profiles;
--- +goose StatementEnd
-
--- +goose StatementBegin
-DROP TABLE profiles_old;
--- +goose StatementEnd
-
--- +goose Down
--- +goose StatementBegin
--- This is a destructive migration, so the down migration is not provided.
--- If needed, you should restore from a backup.
-SELECT 'down migration not implemented';
--- +goose StatementEnd
@@ -1,58 +0,0 @@
--- +goose Up
--- +goose StatementBegin
-CREATE MATERIALIZED VIEW cohort_events_mv ENGINE = AggregatingMergeTree()
-ORDER BY (project_id, name, created_at, profile_id) POPULATE AS
-SELECT project_id,
-    name,
-    toDate(created_at) AS created_at,
-    profile_id,
-    COUNT() AS event_count
-FROM events_v2
-WHERE profile_id != device_id
-GROUP BY project_id,
-    name,
-    created_at,
-    profile_id;
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE MATERIALIZED VIEW distinct_event_names_mv ENGINE = AggregatingMergeTree()
-ORDER BY (project_id, name, created_at) POPULATE AS
-SELECT project_id,
-    name,
-    max(created_at) AS created_at,
-    count() AS event_count
-FROM events_v2
-GROUP BY project_id,
-    name;
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE MATERIALIZED VIEW event_property_values_mv ENGINE = AggregatingMergeTree()
-ORDER BY (project_id, name, property_key, property_value) POPULATE AS
-select project_id,
-    name,
-    key_value.keys as property_key,
-    key_value.values as property_value,
-    created_at
-from (
-    SELECT project_id,
-        name,
-        untuple(arrayJoin(properties)) as key_value,
-        max(created_at) as created_at
-    from events_v2
-    group by project_id,
-        name,
-        key_value
-)
-where property_value != ''
-    and property_key != ''
-    and property_key NOT IN ('__duration_from', '__properties_from')
-group by project_id,
-    name,
-    property_key,
-    property_value,
-    created_at;
--- +goose StatementEnd
--- +goose Down
--- +goose StatementBegin
-SELECT 'down SQL query';
--- +goose StatementEnd
@@ -1,351 +0,0 @@
--- +goose Up
--- +goose StatementBegin
-CREATE DATABASE IF NOT EXISTS openpanel;
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS self_hosting_replicated ON CLUSTER '{cluster}' (
-    created_at Date,
-    domain String,
-    count UInt64
-) ENGINE = ReplicatedMergeTree(
-    '/clickhouse/tables/{shard}/self_hosting_replicated',
-    '{replica}'
-)
-ORDER BY (domain, created_at) PARTITION BY toYYYYMM(created_at);
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS events_replicated ON CLUSTER '{cluster}' (
-    `id` UUID DEFAULT generateUUIDv4(),
-    `name` String,
-    `sdk_name` String,
-    `sdk_version` String,
-    `device_id` String,
-    `profile_id` String,
-    `project_id` String,
-    `session_id` String,
-    `path` String,
-    `origin` String,
-    `referrer` String,
-    `referrer_name` String,
-    `referrer_type` String,
-    `duration` UInt64,
-    `properties` Map(String, String),
-    `created_at` DateTime64(3),
-    `country` String,
-    `city` String,
-    `region` String,
-    `longitude` Nullable(Float32),
-    `latitude` Nullable(Float32),
-    `os` String,
-    `os_version` String,
-    `browser` String,
-    `browser_version` String,
-    `device` String,
-    `brand` String,
-    `model` String,
-    `imported_at` Nullable(DateTime),
-    INDEX idx_name name TYPE bloom_filter GRANULARITY 1,
-    INDEX idx_properties_bounce properties ['__bounce'] TYPE set(3) GRANULARITY 1,
-    INDEX idx_origin origin TYPE bloom_filter(0.05) GRANULARITY 1,
-    INDEX idx_path path TYPE bloom_filter(0.01) GRANULARITY 1
-) ENGINE = ReplicatedMergeTree(
-    '/clickhouse/tables/{shard}/events_replicated',
-    '{replica}'
-) PARTITION BY toYYYYMM(created_at)
-ORDER BY (project_id, toDate(created_at), profile_id, name) SETTINGS index_granularity = 8192;
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS events_bots_replicated ON CLUSTER '{cluster}' (
-    `id` UUID DEFAULT generateUUIDv4(),
-    `project_id` String,
-    `name` String,
-    `type` String,
-    `path` String,
-    `created_at` DateTime64(3)
-) ENGINE = ReplicatedMergeTree(
-    '/clickhouse/tables/{shard}/events_bots_replicated',
-    '{replica}'
-)
-ORDER BY (project_id, created_at) SETTINGS index_granularity = 8192;
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS profiles_replicated ON CLUSTER '{cluster}' (
-    `id` String,
-    `is_external` Bool,
-    `first_name` String,
-    `last_name` String,
-    `email` String,
-    `avatar` String,
-    `properties` Map(String, String),
-    `project_id` String,
-    `created_at` DateTime,
-    INDEX idx_first_name first_name TYPE bloom_filter GRANULARITY 1,
-    INDEX idx_last_name last_name TYPE bloom_filter GRANULARITY 1,
-    INDEX idx_email email TYPE bloom_filter GRANULARITY 1
-) ENGINE = ReplicatedReplacingMergeTree(
-    '/clickhouse/tables/{shard}/profiles_replicated',
-    '{replica}',
-    created_at
-) PARTITION BY toYYYYMM(created_at)
-ORDER BY (project_id, id) SETTINGS index_granularity = 8192;
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS profile_aliases_replicated ON CLUSTER '{cluster}' (
-    `project_id` String,
-    `profile_id` String,
-    `alias` String,
-    `created_at` DateTime
-) ENGINE = ReplicatedMergeTree(
-    '/clickhouse/tables/{shard}/profile_aliases_replicated',
-    '{replica}'
-)
-ORDER BY (project_id, profile_id, alias, created_at) SETTINGS index_granularity = 8192;
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE MATERIALIZED VIEW IF NOT EXISTS dau_mv_replicated ON CLUSTER '{cluster}' ENGINE = ReplicatedAggregatingMergeTree(
-    '/clickhouse/tables/{shard}/dau_mv_replicated',
-    '{replica}'
-) PARTITION BY toYYYYMMDD(date)
-ORDER BY (project_id, date) AS
-SELECT toDate(created_at) as date,
-    uniqState(profile_id) as profile_id,
-    project_id
-FROM events_replicated
-GROUP BY date,
-    project_id;
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE MATERIALIZED VIEW IF NOT EXISTS cohort_events_mv_replicated ON CLUSTER '{cluster}' ENGINE = ReplicatedAggregatingMergeTree(
-    '/clickhouse/tables/{shard}/cohort_events_mv_replicated',
-    '{replica}'
-)
-ORDER BY (project_id, name, created_at, profile_id) AS
-SELECT project_id,
-    name,
-    toDate(created_at) AS created_at,
-    profile_id,
-    COUNT() AS event_count
-FROM events_replicated
-WHERE profile_id != device_id
-GROUP BY project_id,
-    name,
-    created_at,
-    profile_id;
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE MATERIALIZED VIEW IF NOT EXISTS distinct_event_names_mv_replicated ON CLUSTER '{cluster}' ENGINE = ReplicatedAggregatingMergeTree(
-    '/clickhouse/tables/{shard}/distinct_event_names_mv_replicated',
-    '{replica}'
-)
-ORDER BY (project_id, name, created_at) AS
-SELECT project_id,
-    name,
-    max(created_at) AS created_at,
-    count() AS event_count
-FROM events_replicated
-GROUP BY project_id,
-    name;
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE MATERIALIZED VIEW IF NOT EXISTS event_property_values_mv_replicated ON CLUSTER '{cluster}' ENGINE = ReplicatedAggregatingMergeTree(
-    '/clickhouse/tables/{shard}/event_property_values_mv_replicated',
-    '{replica}'
-)
-ORDER BY (project_id, name, property_key, property_value) AS
-SELECT project_id,
-    name,
-    key_value.keys as property_key,
-    key_value.values as property_value,
-    created_at
-FROM (
-    SELECT project_id,
-        name,
-        untuple(arrayJoin(properties)) as key_value,
-        max(created_at) as created_at
-    FROM events_replicated
-    GROUP BY project_id,
-        name,
-        key_value
-)
-WHERE property_value != ''
-    AND property_key != ''
-    AND property_key NOT IN ('__duration_from', '__properties_from')
-GROUP BY project_id,
-    name,
-    property_key,
-    property_value,
-    created_at;
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS self_hosting_distributed ON CLUSTER '{cluster}' AS self_hosting_replicated ENGINE = Distributed(
-    '{cluster}',
-    openpanel,
-    self_hosting_replicated,
-    cityHash64(domain)
-);
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS events_distributed ON CLUSTER '{cluster}' AS events_replicated ENGINE = Distributed(
-    '{cluster}',
-    openpanel,
-    events_replicated,
-    cityHash64(project_id, toString(toStartOfHour(created_at)))
-);
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS events_bots_distributed ON CLUSTER '{cluster}' AS events_bots_replicated ENGINE = Distributed(
-    '{cluster}',
-    openpanel,
-    events_bots_replicated,
-    cityHash64(project_id, toString(toStartOfDay(created_at)))
-);
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS profiles_distributed ON CLUSTER '{cluster}' AS profiles_replicated ENGINE = Distributed(
-    '{cluster}',
-    openpanel,
-    profiles_replicated,
-    cityHash64(project_id)
-);
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS dau_mv_distributed ON CLUSTER '{cluster}' AS dau_mv_replicated ENGINE = Distributed(
-    '{cluster}',
-    openpanel,
-    dau_mv_replicated,
-    rand()
-);
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS cohort_events_mv_distributed ON CLUSTER '{cluster}' AS cohort_events_mv_replicated ENGINE = Distributed(
-    '{cluster}',
-    openpanel,
-    cohort_events_mv_replicated,
-    rand()
-);
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS distinct_event_names_mv_distributed ON CLUSTER '{cluster}' AS distinct_event_names_mv_replicated ENGINE = Distributed(
-    '{cluster}',
-    openpanel,
-    distinct_event_names_mv_replicated,
-    rand()
-);
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS event_property_values_mv_distributed ON CLUSTER '{cluster}' AS event_property_values_mv_replicated ENGINE = Distributed(
-    '{cluster}',
-    openpanel,
-    event_property_values_mv_replicated,
-    rand()
-);
--- +goose StatementEnd
--- +goose StatementBegin
-CREATE TABLE IF NOT EXISTS profile_aliases_distributed ON CLUSTER '{cluster}' AS profile_aliases_replicated ENGINE = Distributed(
-    '{cluster}',
-    openpanel,
-    profile_aliases_replicated,
-    cityHash64(project_id)
-);
--- +goose StatementEnd
--- +goose StatementBegin
-INSERT INTO events_replicated
-SELECT *
-FROM events_v2;
--- +goose StatementEnd
--- +goose StatementBegin
-INSERT INTO events_bots_replicated
-SELECT *
-FROM events_bots;
--- +goose StatementEnd
--- +goose StatementBegin
-INSERT INTO profiles_replicated
-SELECT *
-FROM profiles;
--- +goose StatementEnd
--- +goose StatementBegin
-INSERT INTO profile_aliases_replicated
-SELECT *
-FROM profile_aliases;
--- +goose StatementEnd
--- +goose StatementBegin
-INSERT INTO self_hosting_replicated
-SELECT *
-FROM self_hosting;
--- +goose StatementEnd
--- +goose StatementBegin
-INSERT INTO dau_mv_replicated
-SELECT *
-FROM dau_mv;
--- +goose StatementEnd
--- +goose StatementBegin
-INSERT INTO cohort_events_mv_replicated
-SELECT *
-FROM cohort_events_mv;
--- +goose StatementEnd
--- +goose StatementBegin
-INSERT INTO distinct_event_names_mv_replicated
-SELECT *
-FROM distinct_event_names_mv;
--- +goose StatementEnd
--- +goose StatementBegin
-INSERT INTO event_property_values_mv_replicated
-SELECT *
-FROM event_property_values_mv;
--- +goose StatementEnd
--- +goose Down
--- +goose StatementBegin
-DROP TABLE IF EXISTS events_distributed ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS events_bots_distributed ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS profiles_distributed ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS events_replicated ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS events_bots_replicated ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS profiles_replicated ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS profile_aliases_replicated ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS dau_mv_replicated ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS cohort_events_mv_replicated ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS distinct_event_names_mv_replicated ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS event_property_values_mv_replicated ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS dau_mv_distributed ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS cohort_events_mv_distributed ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS distinct_event_names_mv_distributed ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
--- +goose StatementBegin
-DROP TABLE IF EXISTS event_property_values_mv_distributed ON CLUSTER '{cluster}' SYNC;
--- +goose StatementEnd
-TRUNCATE TABLE events_replicated;
-TRUNCATE TABLE events_bots_replicated;
-TRUNCATE TABLE profiles_replicated;
-TRUNCATE TABLE profile_aliases_replicated;
-TRUNCATE TABLE self_hosting_replicated;
-TRUNCATE TABLE dau_mv_replicated;
-TRUNCATE TABLE cohort_events_mv_replicated;
-TRUNCATE TABLE distinct_event_names_mv_replicated;
-TRUNCATE TABLE event_property_values_mv_replicated;
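The migrations above rely on goose's `-- +goose StatementBegin` / `-- +goose StatementEnd` markers to split a multi-statement file into the statements a ClickHouse client can execute one at a time. A custom runner (the direction this commit takes) can recover the same "Up" statements with a small parser. A minimal sketch for illustration; this is not OpenPanel's actual migration runner, and the function name is hypothetical:

```typescript
// Hypothetical helper: split a goose-style migration file into its "Up"
// statements, ignoring everything after the "-- +goose Down" marker.
function extractUpStatements(sql: string): string[] {
  const statements: string[] = [];
  let current: string[] | null = null;
  let inDown = false;

  for (const line of sql.split('\n')) {
    const trimmed = line.trim();
    if (trimmed === '-- +goose Down') {
      inDown = true; // stop collecting once the Down section starts
    }
    if (inDown) continue;
    if (trimmed === '-- +goose StatementBegin') {
      current = []; // start collecting a new statement
    } else if (trimmed === '-- +goose StatementEnd') {
      if (current) statements.push(current.join('\n').trim());
      current = null;
    } else if (current) {
      current.push(line);
    }
  }
  return statements;
}
```

Each returned string can then be sent to ClickHouse as a single query, which is what the StatementBegin/End pairs guarantee goose itself does.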
@@ -1,28 +0,0 @@
-#!/bin/bash
-
-
-if [ -n "$CLICKHOUSE_URL_DIRECT" ]; then
-    export GOOSE_DBSTRING=$CLICKHOUSE_URL_DIRECT
-elif [ -z "$CLICKHOUSE_URL" ]; then
-    echo "Neither CLICKHOUSE_URL_DIRECT nor CLICKHOUSE_URL is set"
-    exit 1
-else
-    export GOOSE_DBSTRING=$CLICKHOUSE_URL
-fi
-
-echo "Clickhouse migration script"
-echo ""
-echo "================="
-echo "Selected database: $GOOSE_DBSTRING"
-echo "================="
-echo ""
-if [ "$1" != "create" ] && [ -z "$CI" ]; then
-    read -p "Are you sure you want to run migrations on this database? (y/n) " -n 1 -r
-    echo
-    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
-        echo "Migration cancelled."
-        exit 0
-    fi
-fi
-
-goose clickhouse --dir ./migrations $@
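The removed wrapper picks the connection string with a simple precedence: `CLICKHOUSE_URL_DIRECT` wins if set, otherwise `CLICKHOUSE_URL`, otherwise the script refuses to run. The same precedence expressed in TypeScript, as a sketch (variable names mirror the script; this is not code from the repo):

```typescript
// Sketch of the removed script's URL selection: the direct URL takes
// precedence, the default URL is the fallback, and missing both is an error.
function resolveClickhouseUrl(env: Record<string, string | undefined>): string {
  const url = env.CLICKHOUSE_URL_DIRECT || env.CLICKHOUSE_URL;
  if (!url) {
    throw new Error('Neither CLICKHOUSE_URL_DIRECT nor CLICKHOUSE_URL is set');
  }
  return url;
}
```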
@@ -3,13 +3,11 @@
   "version": "0.0.1",
   "main": "index.ts",
   "scripts": {
-    "goose": "pnpm with-env ./migrations/goose",
     "codegen": "pnpm with-env prisma generate",
     "migrate": "pnpm with-env prisma migrate dev",
-    "migrate:deploy:db:code": "pnpm with-env jiti ./code-migrations/migrate.ts",
+    "migrate:deploy:code": "pnpm with-env jiti ./code-migrations/migrate.ts",
     "migrate:deploy:db": "pnpm with-env prisma migrate deploy",
-    "migrate:deploy:ch": "pnpm goose up",
-    "migrate:deploy": "pnpm migrate:deploy:db && pnpm migrate:deploy:db:code && pnpm migrate:deploy:ch",
+    "migrate:deploy": "pnpm migrate:deploy:db && pnpm migrate:deploy:code",
     "typecheck": "tsc --noEmit",
     "with-env": "dotenv -e ../../.env -c --"
   },
@@ -1,7 +1,7 @@
 import { type Redis, getRedisCache, runEvery } from '@openpanel/redis';

 import { getSafeJson } from '@openpanel/common';
-import { TABLE_NAMES, ch } from '../clickhouse-client';
+import { TABLE_NAMES, ch } from '../clickhouse/client';
 import type { IClickhouseBotEvent } from '../services/event.service';
 import { BaseBuffer } from './base-buffer';
@@ -1,266 +0,0 @@
|
|||||||
import { generateId, getSafeJson } from '@openpanel/common';
|
|
||||||
import type { ILogger } from '@openpanel/logger';
|
|
||||||
import { createLogger } from '@openpanel/logger';
|
|
||||||
import { getRedisCache } from '@openpanel/redis';
|
|
||||||
import { pathOr } from 'ramda';
|
|
||||||
|
|
||||||
export type Find<T, R = unknown> = (
|
|
||||||
callback: (item: T) => boolean,
|
|
||||||
) => Promise<R | null>;
|
|
||||||
|
|
||||||
export type FindMany<T, R = unknown> = (
|
|
||||||
callback: (item: T) => boolean,
|
|
||||||
) => Promise<R[]>;
|
|
||||||
|
|
||||||
export class RedisBuffer<T> {
|
|
||||||
public name: string;
|
|
||||||
protected prefix = 'op:buffer';
|
|
||||||
protected bufferKey: string;
|
|
||||||
private lockKey: string;
|
|
||||||
protected maxBufferSize: number | null;
|
|
||||||
protected logger: ILogger;
|
|
||||||
|
|
||||||
constructor(bufferName: string, maxBufferSize: number | null) {
|
|
||||||
this.name = bufferName;
|
|
||||||
this.bufferKey = bufferName;
|
|
||||||
this.lockKey = `lock:${bufferName}`;
|
|
||||||
this.maxBufferSize = maxBufferSize;
|
|
||||||
this.logger = createLogger({ name: 'buffer' }).child({
|
|
||||||
buffer: bufferName,
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
protected getKey(name?: string) {
|
|
||||||
const key = `${this.prefix}:${this.bufferKey}`;
|
|
||||||
if (name) {
|
|
||||||
return `${key}:${name}`;
|
|
||||||
}
|
|
||||||
return key;
|
|
||||||
}
|
|
||||||
|
|
||||||
async add(item: T): Promise<void> {
|
|
||||||
try {
|
|
||||||
this.onAdd(item);
|
|
||||||
await getRedisCache().rpush(this.getKey(), JSON.stringify(item));
|
|
||||||
const bufferSize = await getRedisCache().llen(this.getKey());
|
|
||||||
|
|
||||||
this.logger.debug(
|
|
||||||
`Item added (${pathOr('unknown', ['id'], item)}) Current size: ${bufferSize}`,
|
|
||||||
);
|
|
||||||
|
|
||||||
if (this.maxBufferSize && bufferSize >= this.maxBufferSize) {
|
|
||||||
await this.tryFlush();
|
|
||||||
}
|
|
||||||
} catch (error) {
|
|
||||||
this.logger.error('Failed to add item to buffer', { error, item });
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
public async tryFlush(): Promise<void> {
|
|
||||||
const lockId = generateId();
|
|
||||||
const acquired = await getRedisCache().set(
|
|
||||||
this.lockKey,
|
|
||||||
lockId,
|
|
||||||
'EX',
|
|
||||||
60,
|
|
||||||
'NX',
|
|
||||||
);
|
|
||||||
|
|
||||||
if (acquired === 'OK') {
|
|
||||||
this.logger.info(`Lock acquired. Attempting to flush. ID: ${lockId}`);
|
|
||||||
try {
|
|
||||||
await this.flush();
|
|
||||||
} catch (error) {
|
|
||||||
this.logger.error(`Failed to flush buffer. ID: ${lockId}`, { error });
|
|
||||||
} finally {
|
|
||||||
this.logger.info(`Releasing lock. ID: ${lockId}`);
|
|
||||||
await this.releaseLock(lockId);
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
this.logger.warn(`Failed to acquire lock. Skipping flush. ID: ${lockId}`);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
protected async waitForReleasedLock(
|
|
||||||
maxWaitTime = 8000,
|
|
||||||
checkInterval = 250,
|
|
||||||
): Promise<boolean> {
|
|
||||||
const startTime = performance.now();
|
|
||||||
|
|
||||||
while (performance.now() - startTime < maxWaitTime) {
|
|
||||||
const lock = await getRedisCache().get(this.lockKey);
|
|
||||||
if (!lock) {
|
|
||||||
return true;
|
|
||||||
}
|
|
||||||
|
|
||||||
await new Promise((resolve) => setTimeout(resolve, checkInterval));
|
|
||||||
}
|
|
||||||
|
|
||||||
this.logger.warn('Timeout waiting for lock release');
|
|
||||||
return false;
|
|
||||||
}
|
|
||||||
|
|
||||||
private async retryOnce(cb: () => Promise<void>) {
|
|
||||||
try {
|
|
||||||
await cb();
|
|
||||||
} catch (e) {
|
|
||||||
this.logger.error(`#1 Failed to execute callback: ${cb.name}`, e);
|
|
||||||
await new Promise((resolve) => setTimeout(resolve, 1000));
|
|
||||||
try {
|
|
||||||
await cb();
|
|
||||||
} catch (e) {
|
|
||||||
this.logger.error(`#2 Failed to execute callback: ${cb.name}`, e);
|
|
||||||
throw e;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
private async flush(): Promise<void> {
|
|
||||||
// Use a transaction to ensure atomicity
|
|
||||||
const result = await getRedisCache()
|
|
||||||
.multi()
|
|
||||||
.lrange(this.getKey(), 0, -1)
|
|
||||||
.lrange(this.getKey('backup'), 0, -1)
|
|
||||||
.del(this.getKey())
|
|
||||||
.exec();
|
|
||||||
|
|
||||||
if (!result) {
|
|
||||||
this.logger.error('No result from redis transaction', {
|
|
||||||
result,
|
|
||||||
});
|
|
||||||
throw new Error('Redis transaction failed');
|
|
||||||
}
|
|
||||||
|
|
||||||
const lrange = result[0];
|
|
||||||
const lrangePrevious = result[1];
|
|
||||||
|
|
||||||
if (!lrange || lrange[0] instanceof Error) {
|
|
||||||
this.logger.error('Error from lrange', {
|
|
||||||
result,
|
|
||||||
});
|
|
||||||
throw new Error('Redis transaction failed');
|
|
||||||
}
|
|
||||||
|
|
||||||
const items = lrange[1] as string[];
|
|
||||||
if (
|
|
||||||
lrangePrevious &&
|
|
||||||
lrangePrevious[0] === null &&
|
|
||||||
Array.isArray(lrangePrevious[1])
|
|
||||||
) {
|
|
||||||
items.push(...(lrangePrevious[1] as string[]));
|
|
||||||
}
|
|
||||||
|
|
||||||
const parsedItems = items
|
|
||||||
.map((item) => getSafeJson<T | null>(item) as T | null)
|
|
||||||
.filter((item): item is T => item !== null);
|
|
||||||
|
|
||||||
if (parsedItems.length === 0) {
|
|
||||||
this.logger.debug('No items to flush');
|
|
||||||
// Clear any existing backup since we have no items to process
|
|
||||||
await getRedisCache().del(this.getKey('backup'));
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
this.logger.info(`Flushing ${parsedItems.length} items`);
|
|
||||||
|
|
||||||
try {
|
|
||||||
      // Create backup before processing
      await getRedisCache().del(this.getKey('backup')); // Clear any existing backup first
      await getRedisCache().lpush(
        this.getKey('backup'),
        ...parsedItems.map((item) => JSON.stringify(item)),
      );

      const { toInsert, toKeep } = await this.processItems(parsedItems);

      if (toInsert.length) {
        await this.retryOnce(() => this.insertIntoDB(toInsert));
        this.onInsert(toInsert);
      }

      // Add back items to keep
      if (toKeep.length > 0) {
        await getRedisCache().lpush(
          this.getKey(),
          ...toKeep.map((item) => JSON.stringify(item)),
        );
      }

      // Clear backup
      await getRedisCache().del(this.getKey('backup'));

      this.logger.info(
        `Inserted ${toInsert.length} items into DB, kept ${toKeep.length} items in buffer`,
        {
          toInsert: toInsert.length,
          toKeep: toKeep.length,
        },
      );
    } catch (error) {
      this.logger.error('Failed to process queue while flushing buffer', {
        error,
        queueSize: parsedItems.length,
      });

      if (parsedItems.length > 0) {
        // Add all items back to the buffer
        this.logger.info('Adding all items back to buffer');
        await getRedisCache().lpush(
          this.getKey(),
          ...parsedItems.map((item) => JSON.stringify(item)),
        );
      }

      // Clear the backup since we're adding items back to main buffer
      await getRedisCache().del(this.getKey('backup'));
    }
  }

  private async releaseLock(lockId: string): Promise<void> {
    this.logger.debug(`Released lock for ${this.getKey()}`);
    const script = `
      if redis.call("get", KEYS[1]) == ARGV[1] then
        return redis.call("del", KEYS[1])
      else
        return 0
      end
    `;
    await getRedisCache().eval(script, 1, this.lockKey, lockId);
  }

  protected async getQueue(count?: number): Promise<T[]> {
    try {
      const items = await getRedisCache().lrange(this.getKey(), 0, count ?? -1);
      return items
        .map((item) => getSafeJson<T | null>(item) as T | null)
        .filter((item): item is T => item !== null);
    } catch (error) {
      this.logger.error('Failed to get queue', { error });
      return [];
    }
  }

  protected processItems(items: T[]): Promise<{ toInsert: T[]; toKeep: T[] }> {
    return Promise.resolve({ toInsert: items, toKeep: [] });
  }

  protected insertIntoDB(_items: T[]): Promise<void> {
    throw new Error('Not implemented');
  }

  protected onAdd(_item: T): void {
    // Override in subclass
  }

  protected onInsert(_item: T[]): void {
    // Override in subclass
  }

  public findMany: FindMany<T, unknown> = () => {
    return Promise.resolve([]);
  };

  public find: Find<T, unknown> = () => {
    return Promise.resolve(null);
  };
}
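The Lua script in releaseLock above is the standard compare-and-delete pattern for a distributed lock: the key is deleted only if it still holds this worker's lockId, so a worker whose lock already expired cannot release a lock that another worker has since acquired. A minimal sketch of the same rule, with an in-memory map standing in for Redis (the map and key names are illustrative; in Redis the GET and DEL must run atomically as a single script, which is exactly what the eval call provides):

```typescript
// Illustrative stand-in for the Redis key space.
const store = new Map<string, string>();

// Mirrors the Lua script: delete the lock key only if it still holds our id.
function releaseLock(key: string, lockId: string): number {
  if (store.get(key) === lockId) {
    store.delete(key);
    return 1; // released
  }
  return 0; // lock expired or was taken over; leave it alone
}

store.set('buffer:lock', 'worker-a');
console.log(releaseLock('buffer:lock', 'worker-b')); // 0 - wrong holder, key untouched
console.log(releaseLock('buffer:lock', 'worker-a')); // 1 - holder releases
```

Issuing GET and DEL as two separate Redis commands would open a race between the check and the delete, which is why the class ships the check to Redis as one eval.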
@@ -5,7 +5,7 @@ import {
   getRedisPub,
   runEvery,
 } from '@openpanel/redis';
-import { ch } from '../clickhouse-client';
+import { ch } from '../clickhouse/client';
 import {
   type IClickhouseEvent,
   type IServiceEvent,
@@ -3,7 +3,7 @@ import { getSafeJson } from '@openpanel/common';
 import { type Redis, getRedisCache } from '@openpanel/redis';
 import { dissocPath, mergeDeepRight, omit, whereEq } from 'ramda';
 
-import { TABLE_NAMES, ch, chQuery } from '../clickhouse-client';
+import { TABLE_NAMES, ch, chQuery } from '../clickhouse/client';
 import type { IClickhouseProfile } from '../services/profile.service';
 import { BaseBuffer } from './base-buffer';
 import { isPartialMatch } from './partial-json-match';
packages/db/src/clickhouse/migration.ts (new file, 454 lines)
@@ -0,0 +1,454 @@
import crypto from 'node:crypto';
import { createClient } from './client';
import { formatClickhouseDate } from './client';

interface CreateTableOptions {
  name: string;
  columns: string[];
  indices?: string[];
  engine?: string;
  orderBy: string[];
  partitionBy?: string;
  settings?: Record<string, string | number>;
  distributionHash: string;
  replicatedVersion: string;
  isClustered: boolean;
}

interface CreateMaterializedViewOptions {
  name: string;
  tableName: string;
  query: string;
  engine?: string;
  orderBy: string[];
  partitionBy?: string;
  settings?: Record<string, string | number>;
  populate?: boolean;
  distributionHash: string;
  replicatedVersion: string;
  isClustered: boolean;
}

const CLUSTER_REPLICA_PATH =
  '/clickhouse/{installation}/{cluster}/tables/{shard}/openpanel/v{replicatedVersion}/{table}';

const replicated = (tableName: string) => `${tableName}_replicated`;

export const chMigrationClient = createClient({
  url: process.env.CLICKHOUSE_URL,
  request_timeout: 3600000, // 1 hour in milliseconds
  keep_alive: {
    enabled: true,
    idle_socket_ttl: 8000,
  },
  compression: {
    request: true,
  },
  clickhouse_settings: {
    wait_end_of_query: 1,
    // Ask ClickHouse to periodically send query execution progress in HTTP headers,
    // creating some activity in the connection.
    send_progress_in_http_headers: 1,
    // The interval of sending these progress headers. Here it is less than 60s.
    http_headers_progress_interval_ms: '50000',
  },
});

export function createDatabase(name: string, isClustered: boolean) {
  if (isClustered) {
    return `CREATE DATABASE IF NOT EXISTS ${name} ON CLUSTER '{cluster}'`;
  }

  return `CREATE DATABASE IF NOT EXISTS ${name}`;
}

/**
 * Creates SQL statements for table creation in ClickHouse.
 * Handles both clustered and non-clustered scenarios.
 */
export function createTable({
  name: tableName,
  columns,
  indices = [],
  engine = 'MergeTree()',
  orderBy = ['tuple()'],
  partitionBy,
  settings = {},
  distributionHash,
  replicatedVersion,
  isClustered,
}: CreateTableOptions): string[] {
  const columnDefinitions = [...columns, ...indices].join(',\n ');

  const settingsClause = Object.entries(settings).length
    ? `SETTINGS ${Object.entries(settings)
        .map(([key, value]) => `${key} = ${value}`)
        .join(', ')}`
    : '';

  const partitionByClause = partitionBy ? `PARTITION BY ${partitionBy}` : '';

  if (!isClustered) {
    // Non-clustered scenario: single table
    return [
      `CREATE TABLE IF NOT EXISTS ${tableName} (
        ${columnDefinitions}
      )
      ENGINE = ${engine}
      ${partitionByClause}
      ORDER BY (${orderBy.join(', ')})
      ${settingsClause}`.trim(),
    ];
  }

  return [
    // Local replicated table
    `CREATE TABLE IF NOT EXISTS ${replicated(tableName)} ON CLUSTER '{cluster}' (
      ${columnDefinitions}
    )
    ENGINE = Replicated${engine.replace(/^(.+?)\((.+?)?\)/, `$1('${CLUSTER_REPLICA_PATH.replace('{replicatedVersion}', replicatedVersion)}', '{replica}', $2)`).replace(/, \)$/, ')')}
    ${partitionByClause}
    ORDER BY (${orderBy.join(', ')})
    ${settingsClause}`.trim(),
    // Distributed table
    `CREATE TABLE IF NOT EXISTS ${tableName} ON CLUSTER '{cluster}' AS ${replicated(tableName)}
    ENGINE = Distributed('{cluster}', currentDatabase(), ${replicated(tableName)}, ${distributionHash})`,
  ];
}
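The ENGINE rewrite in the clustered branch above is the densest part of createTable: a regex splices the Keeper replica path and the {replica} macro in front of the engine's existing arguments, and the trailing replace cleans up the dangling comma left behind when the engine had no arguments (e.g. MergeTree()). A standalone sketch of that transformation, with a hard-coded path standing in for the interpolated CLUSTER_REPLICA_PATH:

```typescript
const REPLICA_PATH =
  '/clickhouse/{installation}/{cluster}/tables/{shard}/openpanel/v1/{table}';

// MergeTree()                    -> ReplicatedMergeTree('<path>', '{replica}')
// ReplacingMergeTree(created_at) -> ReplicatedReplacingMergeTree('<path>', '{replica}', created_at)
function toReplicatedEngine(engine: string): string {
  return `Replicated${engine
    .replace(/^(.+?)\((.+?)?\)/, `$1('${REPLICA_PATH}', '{replica}', $2)`)
    .replace(/, \)$/, ')')}`;
}

console.log(toReplicatedEngine('MergeTree()'));
console.log(toReplicatedEngine('ReplacingMergeTree(created_at)'));
```

When the engine had no arguments, $2 substitutes an empty string and leaves `, )` at the end, which the second replace collapses to `)`.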

/**
 * Generates ALTER TABLE statements for adding columns
 */
export function addColumns(
  tableName: string,
  columns: string[],
  isClustered: boolean,
): string[] {
  if (isClustered) {
    return columns.map(
      (col) =>
        `ALTER TABLE ${replicated(tableName)} ON CLUSTER '{cluster}' ADD COLUMN IF NOT EXISTS ${col}`,
    );
  }

  return columns.map(
    (col) => `ALTER TABLE ${tableName} ADD COLUMN IF NOT EXISTS ${col}`,
  );
}

/**
 * Generates ALTER TABLE statements for dropping columns
 */
export function dropColumns(
  tableName: string,
  columnNames: string[],
  isClustered: boolean,
): string[] {
  if (isClustered) {
    return columnNames.map(
      (colName) =>
        `ALTER TABLE ${replicated(tableName)} ON CLUSTER '{cluster}' DROP COLUMN IF EXISTS ${colName}`,
    );
  }

  return columnNames.map(
    (colName) => `ALTER TABLE ${tableName} DROP COLUMN IF EXISTS ${colName}`,
  );
}

export async function getExistingTables() {
  try {
    const existingTablesQuery = await chMigrationClient.query({
      query: `SELECT name FROM system.tables WHERE database = 'openpanel'`,
      format: 'JSONEachRow',
    });
    return (await existingTablesQuery.json<{ name: string }>())
      .map((table) => table.name)
      .filter((table) => !table.includes('.inner_id'));
  } catch (e) {
    console.error(e);
    return [];
  }
}

export function renameTable({
  from,
  to,
  isClustered,
}: {
  from: string;
  to: string;
  isClustered: boolean;
}) {
  if (isClustered) {
    return [
      `RENAME TABLE ${replicated(from)} TO ${replicated(to)} ON CLUSTER '{cluster}'`,
      `RENAME TABLE ${from} TO ${to} ON CLUSTER '{cluster}'`,
    ];
  }

  return [`RENAME TABLE ${from} TO ${to}`];
}

export function dropTable(tableName: string, isClustered: boolean) {
  if (isClustered) {
    return `DROP TABLE IF EXISTS ${tableName} ON CLUSTER '{cluster}'`;
  }

  return `DROP TABLE IF EXISTS ${tableName}`;
}

export function moveDataBetweenTables({
  from,
  to,
  batch,
}: {
  from: string;
  to: string;
  batch?: {
    column: string;
    interval?: 'day' | 'week' | 'month';
    transform?: (date: Date) => string;
    endDate?: Date;
    startDate?: Date;
  };
}): string[] {
  const sqls: string[] = [];

  if (!batch) {
    return [`INSERT INTO ${to} SELECT * FROM ${from}`];
  }

  // Start from today and go back 3 years
  const endDate = batch.endDate || new Date();
  if (!batch.endDate) {
    endDate.setDate(endDate.getDate() + 1); // Add 1 day to include today
  }
  const startDate = batch.startDate || new Date();
  if (!batch.startDate) {
    startDate.setFullYear(startDate.getFullYear() - 3);
  }

  let currentDate = endDate;
  const interval = batch.interval || 'day';

  while (currentDate > startDate) {
    const previousDate = new Date(currentDate);

    switch (interval) {
      case 'month':
        previousDate.setMonth(previousDate.getMonth() - 1);
        break;
      case 'week':
        previousDate.setDate(previousDate.getDate() - 7);
        // Ensure we don't go below startDate
        if (previousDate < startDate) {
          previousDate.setTime(startDate.getTime());
        }
        break;
      // day
      default:
        previousDate.setDate(previousDate.getDate() - 1);
        break;
    }

    const sql = `INSERT INTO ${to}
      SELECT * FROM ${from}
      WHERE ${batch.column} > '${batch.transform ? batch.transform(previousDate) : formatClickhouseDate(previousDate, true)}'
        AND ${batch.column} <= '${batch.transform ? batch.transform(currentDate) : formatClickhouseDate(currentDate, true)}'`;
    sqls.push(sql);

    currentDate = previousDate;
  }

  return sqls;
}
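moveDataBetweenTables walks backwards from endDate toward startDate, emitting one INSERT ... SELECT per half-open window (previous, current] so a large backfill is split into bounded chunks. The window generation alone, reduced to the default one-day interval (the dates are illustrative):

```typescript
// Produces (start, end] windows walking backwards one day at a time,
// mirroring the batching loop in moveDataBetweenTables.
function dayWindows(startDate: Date, endDate: Date): Array<[Date, Date]> {
  const windows: Array<[Date, Date]> = [];
  let current = new Date(endDate);
  while (current > startDate) {
    const previous = new Date(current);
    previous.setDate(previous.getDate() - 1);
    windows.push([previous, current]);
    current = previous;
  }
  return windows;
}

const windows = dayWindows(new Date('2024-01-01'), new Date('2024-01-04'));
console.log(windows.length); // 3 one-day windows, newest first
```

Because each batch filters on column > previous AND column <= current, adjacent windows share a boundary without overlapping, so no row is copied twice.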

export function createMaterializedView({
  name: tableName,
  query,
  engine = 'AggregatingMergeTree()',
  orderBy,
  partitionBy,
  settings = {},
  populate = false,
  distributionHash = 'rand()',
  replicatedVersion,
  isClustered,
}: CreateMaterializedViewOptions): string[] {
  const settingsClause = Object.entries(settings).length
    ? `SETTINGS ${Object.entries(settings)
        .map(([key, value]) => `${key} = ${value}`)
        .join(', ')}`
    : '';

  const partitionByClause = partitionBy ? `PARTITION BY ${partitionBy}` : '';

  // Transform query to use replicated table names in clustered mode
  const transformedQuery = query.replace(/\{(\w+)\}/g, (_, tableName) =>
    isClustered ? replicated(tableName) : tableName,
  );

  if (!isClustered) {
    return [
      `CREATE MATERIALIZED VIEW IF NOT EXISTS ${tableName}
      ENGINE = ${engine}
      ${partitionByClause}
      ORDER BY (${orderBy.join(', ')})
      ${settingsClause}
      ${populate ? 'POPULATE' : ''}
      AS ${transformedQuery}`.trim(),
    ];
  }

  return [
    // Replicated materialized view
    `CREATE MATERIALIZED VIEW IF NOT EXISTS ${replicated(tableName)} ON CLUSTER '{cluster}'
    ENGINE = Replicated${engine.replace(/^(.+?)\((.+?)?\)/, `$1('${CLUSTER_REPLICA_PATH.replace('{replicatedVersion}', replicatedVersion)}', '{replica}', $2)`).replace(/, \)$/, ')')}
    ${partitionByClause}
    ORDER BY (${orderBy.join(', ')})
    ${settingsClause}
    ${populate ? 'POPULATE' : ''}
    AS ${transformedQuery}`.trim(),
    // Distributed materialized view
    `CREATE TABLE IF NOT EXISTS ${tableName} ON CLUSTER '{cluster}' AS ${replicated(tableName)}
    ENGINE = Distributed('{cluster}', currentDatabase(), ${replicated(tableName)}, ${distributionHash})`,
  ];
}

export function countRows(tableName: string) {
  return `SELECT count() FROM ${tableName}`;
}

export async function runClickhouseMigrationCommands(sqls: string[]) {
  let abort: AbortController | undefined;
  let activeQueryId: string | undefined;

  const handleTermination = async (signal: string) => {
    console.warn(
      `Received ${signal}. Cleaning up active queries before exit...`,
    );

    if (abort) {
      abort.abort();
    }
  };

  // Create bound handler functions
  const handleSigterm = () => handleTermination('SIGTERM');
  const handleSigint = () => handleTermination('SIGINT');

  // Register handlers
  process.on('SIGTERM', handleSigterm);
  process.on('SIGINT', handleSigint);

  try {
    for (const sql of sqls) {
      abort = new AbortController();
      let timer: NodeJS.Timeout | undefined;
      let resolve: ((value: unknown) => void) | undefined;
      activeQueryId = crypto.createHash('sha256').update(sql).digest('hex');

      console.log('----------------------------------------');
      console.log('---| Running query | Query ID:', activeQueryId);
      console.log('---| SQL |------------------------------');
      console.log(sql);
      console.log('----------------------------------------');

      try {
        const res = await Promise.race([
          chMigrationClient.command({
            query: sql,
            query_id: activeQueryId,
            abort_signal: abort?.signal,
          }),
          new Promise((r) => {
            resolve = r;
            let checking = false; // Add flag to prevent multiple concurrent checks

            async function check() {
              if (checking) return; // Skip if already checking
              checking = true;

              try {
                const res = await chMigrationClient
                  .query({
                    query: `SELECT
                        query_id,
                        elapsed,
                        read_rows,
                        written_rows,
                        memory_usage
                      FROM system.processes
                      WHERE query_id = '${activeQueryId}'`,
                    format: 'JSONEachRow',
                  })
                  .then((res) => res.json());

                const formatMemory = (bytes: number) => {
                  const units = ['B', 'KB', 'MB', 'GB'];
                  let size = bytes;
                  let unitIndex = 0;
                  while (size >= 1024 && unitIndex < units.length - 1) {
                    size /= 1024;
                    unitIndex++;
                  }
                  return `${Math.round(size * 100) / 100}${units[unitIndex]}`;
                };

                const formatNumber = (num: number) => {
                  return num.toString().replace(/\B(?=(\d{3})+(?!\d))/g, ',');
                };

                if (Array.isArray(res) && res.length > 0) {
                  const { elapsed, read_rows, written_rows, memory_usage } =
                    res[0] as any;
                  console.log(
                    `Progress: ${elapsed.toFixed(2)}s | Memory: ${formatMemory(memory_usage)} | Read: ${formatNumber(read_rows)} rows | Written: ${formatNumber(written_rows)} rows`,
                  );
                }
              } finally {
                checking = false;
              }

              timer = setTimeout(check, 5000); // Schedule next check after current one completes
            }

            // Start the first check after 5 seconds
            timer = setTimeout(check, 5000);
          }),
        ]);

        if (timer) {
          clearTimeout(timer);
        }
        if (resolve) {
          resolve(res);
        }
      } catch (e) {
        console.log('Failed on query', sql);
        throw e;
      }
    }
  } catch (e) {
    if (abort) {
      abort.abort();
    }

    if (activeQueryId) {
      try {
        await chMigrationClient.command({
          query: `KILL QUERY WHERE query_id = '${activeQueryId}'`,
        });
        console.log(`Successfully killed query ${activeQueryId}`);
      } catch (err) {
        console.error(`Failed to kill query ${activeQueryId}:`, err);
      }
    }

    throw e;
  } finally {
    // Clean up event listeners
    process.off('SIGTERM', handleSigterm);
    process.off('SIGINT', handleSigint);
  }
}
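The inline formatMemory and formatNumber helpers above only shape the progress log line; extracted so they can stand alone, they behave like this:

```typescript
// Binary-unit formatter for the migration progress log.
const formatMemory = (bytes: number) => {
  const units = ['B', 'KB', 'MB', 'GB'];
  let size = bytes;
  let unitIndex = 0;
  while (size >= 1024 && unitIndex < units.length - 1) {
    size /= 1024;
    unitIndex++;
  }
  return `${Math.round(size * 100) / 100}${units[unitIndex]}`;
};

// Thousands separators for row counts.
const formatNumber = (num: number) =>
  num.toString().replace(/\B(?=(\d{3})+(?!\d))/g, ',');

console.log(formatMemory(1536)); // 1.5KB
console.log(formatNumber(1234567)); // 1,234,567
```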
@@ -13,7 +13,7 @@ import {
   TABLE_NAMES,
   formatClickhouseDate,
   toDate,
-} from '../clickhouse-client';
+} from '../clickhouse/client';
 import { createSqlBuilder } from '../sql-builder';
 
 export function transformPropertyKey(property: string) {
@@ -12,7 +12,7 @@ import {
   chQuery,
   convertClickhouseDateToJs,
   formatClickhouseDate,
-} from '../clickhouse-client';
+} from '../clickhouse/client';
 import type { EventMeta, Prisma } from '../prisma-client';
 import { db } from '../prisma-client';
 import { createSqlBuilder } from '../sql-builder';
@@ -143,6 +143,10 @@ export async function connectUserToOrganization({
     throw new Error('Invite not found');
   }
 
+  if (process.env.ALLOW_INVITATION === 'false') {
+    throw new Error('Invitations are not allowed');
+  }
+
   if (invite.expiresAt < new Date()) {
     throw new Error('Invite expired');
   }
@@ -11,7 +11,7 @@ import {
   ch,
   chQuery,
   formatClickhouseDate,
-} from '../clickhouse-client';
+} from '../clickhouse/client';
 import { createSqlBuilder } from '../sql-builder';
 
 export type IProfileMetrics = {
@@ -1,6 +1,6 @@
 import { escape } from 'sqlstring';
 
-import { TABLE_NAMES, chQuery } from '../clickhouse-client';
+import { TABLE_NAMES, chQuery } from '../clickhouse/client';
 
 type IGetWeekRetentionInput = {
   projectId: string;
@@ -1,4 +1,4 @@
-import { TABLE_NAMES } from './clickhouse-client';
+import { TABLE_NAMES } from './clickhouse/client';
 
 export interface SqlBuilderObject {
   where: Record<string, string>;
@@ -13,11 +13,6 @@ export async function sendEmail<T extends TemplateKey>(
     data: z.infer<Templates[T]['schema']>;
   },
 ) {
-  if (!process.env.RESEND_API_KEY) {
-    return null;
-  }
-
-  const resend = new Resend(process.env.RESEND_API_KEY);
   const { to, data } = options;
   const { subject, Component, schema } = templates[template];
   const props = schema.safeParse(data);
@@ -27,6 +22,14 @@ export async function sendEmail<T extends TemplateKey>(
     return null;
   }
 
+  if (!process.env.RESEND_API_KEY) {
+    console.log('No RESEND_API_KEY found, here is the data');
+    console.log(data);
+    return null;
+  }
+
+  const resend = new Resend(process.env.RESEND_API_KEY);
+
   try {
     const res = await resend.emails.send({
       from: FROM,
@@ -37,6 +37,38 @@ import {
 
 const zProvider = z.enum(['email', 'google', 'github']);
 
+async function getIsRegistrationAllowed(inviteId?: string | null) {
+  // ALLOW_REGISTRATION is always undefined in cloud
+  if (process.env.ALLOW_REGISTRATION === undefined) {
+    return true;
+  }
+
+  // Self-hosting logic
+  // 1. First user is always allowed
+  const count = await db.user.count();
+  if (count === 0) {
+    return true;
+  }
+
+  // 2. If there is an invite, check if it is valid
+  if (inviteId) {
+    if (process.env.ALLOW_INVITATION === 'false') {
+      return false;
+    }
+
+    const invite = await db.invite.findUnique({
+      where: {
+        id: inviteId,
+      },
+    });
+
+    return !!invite;
+  }
+
+  // 3. Otherwise, check if general registration is allowed
+  return process.env.ALLOW_REGISTRATION !== 'false';
+}
+
 export const authRouter = createTRPCRouter({
   signOut: publicProcedure.mutation(async ({ ctx }) => {
     deleteSessionTokenCookie(ctx.setCookie);
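getIsRegistrationAllowed reduces to a small decision table: cloud (ALLOW_REGISTRATION unset) always allows, the first user always allows so the instance can be bootstrapped, invites are honored unless ALLOW_INVITATION="false", and otherwise ALLOW_REGISTRATION decides. A pure restatement of those branches, with the env and database reads lifted into parameters for illustration (the real function reads process.env and Prisma):

```typescript
// Pure restatement of getIsRegistrationAllowed's branches; all inputs
// that the real function reads from the environment or DB are parameters.
function isRegistrationAllowed(opts: {
  allowRegistration?: string; // undefined in cloud
  allowInvitation?: string;
  userCount: number;
  hasValidInvite: boolean;
  inviteId?: string | null;
}): boolean {
  if (opts.allowRegistration === undefined) return true; // cloud
  if (opts.userCount === 0) return true; // first user bootstraps the instance
  if (opts.inviteId) {
    if (opts.allowInvitation === 'false') return false;
    return opts.hasValidInvite;
  }
  return opts.allowRegistration !== 'false';
}
```

Keeping the branches in this order matters: the first-user check runs before the invite and registration checks, so a fresh self-hosted instance with ALLOW_REGISTRATION="false" can still create its initial account.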
@@ -46,7 +78,15 @@ export const authRouter = createTRPCRouter({
   }),
   signInOAuth: publicProcedure
     .input(z.object({ provider: zProvider, inviteId: z.string().nullish() }))
-    .mutation(({ input, ctx }) => {
+    .mutation(async ({ input, ctx }) => {
+      const isRegistrationAllowed = await getIsRegistrationAllowed(
+        input.inviteId,
+      );
+
+      if (!isRegistrationAllowed) {
+        throw TRPCAccessError('Registrations are not allowed');
+      }
+
       const { provider } = input;
 
       if (input.inviteId) {
@@ -95,6 +135,14 @@ export const authRouter = createTRPCRouter({
   signUpEmail: publicProcedure
     .input(zSignUpEmail)
     .mutation(async ({ input, ctx }) => {
+      const isRegistrationAllowed = await getIsRegistrationAllowed(
+        input.inviteId,
+      );
+
+      if (!isRegistrationAllowed) {
+        throw TRPCAccessError('Registrations are not allowed');
+      }
+
       const provider = 'email';
       const user = await getUserAccount({
         email: input.email,
@@ -3,6 +3,8 @@ SELF_HOSTED="true"
 GEO_IP_HOST="http://op-geo:8080"
 BATCH_SIZE="5000"
 BATCH_INTERVAL="10000"
+ALLOW_REGISTRATION="false"
+ALLOW_INVITATION="true"
 # Will be replaced with the setup script
 REDIS_URL="$REDIS_URL"
 CLICKHOUSE_URL="$CLICKHOUSE_URL"
@@ -5,13 +5,7 @@
   </logger>
 
   <keep_alive_timeout>10</keep_alive_timeout>
-  <!--
-    Avoid the warning: "Listen [::]:9009 failed: Address family for hostname not supported".
-    If Docker has IPv6 disabled, bind ClickHouse to IPv4 to prevent this issue.
-    Add this to the configuration to ensure it listens on all IPv4 interfaces:
-    <listen_host>0.0.0.0</listen_host>
-  -->
-
   <!-- Stop all the unnecessary logging -->
   <query_thread_log remove="remove"/>
   <query_log remove="remove"/>
@@ -25,29 +19,4 @@
   <listen_host>0.0.0.0</listen_host>
   <interserver_listen_host>0.0.0.0</interserver_listen_host>
   <interserver_http_host>op-ch</interserver_http_host>
-
-  <macros>
-    <shard>1</shard>
-    <replica>replica1</replica>
-    <cluster>openpanel_cluster</cluster>
-  </macros>
-
-  <zookeeper>
-    <node index="1">
-      <host>op-zk</host>
-      <port>9181</port>
-    </node>
-  </zookeeper>
-
-  <remote_servers>
-    <openpanel_cluster>
-      <shard>
-        <internal_replication>true</internal_replication>
-        <replica>
-          <host>op-ch</host>
-          <port>9000</port>
-        </replica>
-      </shard>
-    </openpanel_cluster>
-  </remote_servers>
 </clickhouse>
@@ -1,44 +0,0 @@
-<clickhouse>
-  <logger>
-    <level>information</level>
-    <console>true</console>
-  </logger>
-
-  <path>/var/lib/clickhouse/</path>
-  <tmp_path>/var/lib/clickhouse/tmp/</tmp_path>
-
-  <user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
-
-  <timezone>UTC</timezone>
-  <mlock_executable>false</mlock_executable>
-
-  <listen_host>0.0.0.0</listen_host>
-  <interserver_listen_host>0.0.0.0</interserver_listen_host>
-  <interserver_http_host>op-zk</interserver_http_host>
-
-  <keeper_server>
-    <tcp_port>9181</tcp_port>
-    <listen_host>::</listen_host>
-    <interserver_listen_host>::</interserver_listen_host>
-    <server_id>1</server_id>
-    <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
-    <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
-
-    <coordination_settings>
-      <operation_timeout_ms>10000</operation_timeout_ms>
-      <session_timeout_ms>30000</session_timeout_ms>
-    </coordination_settings>
-
-    <raft_configuration>
-      <server>
-        <id>1</id>
-        <hostname>op-zk</hostname>
-        <port>9234</port>
-      </server>
-    </raft_configuration>
-  </keeper_server>
-
-  <distributed_ddl>
-    <path>/clickhouse/production/task_queue/ddl</path>
-  </distributed_ddl>
-</clickhouse>
@@ -65,18 +65,6 @@ services:
         soft: 262144
         hard: 262144
 
-  op-zk:
-    image: clickhouse/clickhouse-server:24.3.2-alpine
-    volumes:
-      - op-zk-data:/var/lib/clickhouse
-      - ./clickhouse/clickhouse-keeper-config.xml:/etc/clickhouse-server/config.xml
-    command: [ 'clickhouse-keeper', '--config-file', '/etc/clickhouse-server/config.xml' ]
-    restart: always
-    ulimits:
-      nofile:
-        soft: 262144
-        hard: 262144
-
   op-api:
     image: lindesvard/openpanel-api:latest
     restart: always
@@ -139,5 +127,3 @@ volumes:
     driver: local
   op-proxy-config:
     driver: local
-  op-zk-data:
-    driver: local
self-hosting/package-lock.json (generated, new file, 1102 lines)
File diff suppressed because it is too large
@@ -3,7 +3,7 @@
   "version": "1.0.0",
   "description": "",
   "scripts": {
-    "test": "echo \"Error: no test specified\" && exit 1"
+    "quiz": "jiti quiz.ts"
   },
   "keywords": [],
   "author": "",
self-hosting/pnpm-lock.yaml (generated, 717 lines deleted)
@@ -1,717 +0,0 @@
-lockfileVersion: '6.0'
-… (remaining 716 lines of the generated lockfile deletion omitted)
@@ -274,8 +274,9 @@ async function initiateOnboarding() {
     {
       type: 'input',
       name: 'CPUS',
-      default: os.cpus().length,
-      message: 'How many CPUs do you have?',
+      default: Math.max(Math.floor(os.cpus().length / 2), 1),
+      message:
+        'How many workers do you want to spawn (in many cases 1-2 is enough)?',
       validate: (value) => {
         const parsed = Number.parseInt(value, 10);
 
@@ -364,6 +365,7 @@ async function initiateOnboarding() {
   '\t- ./stop (example: ./stop)',
   '\t- ./logs (example: ./logs)',
   '\t- ./rebuild (example: ./rebuild op-dashboard)',
+  '\t- ./update (example: ./update) pulls the latest docker images and restarts the service',
   '',
   '2. Danger zone!',
   '\t- ./danger_wipe_everything (example: ./danger_wipe_everything)',
@@ -12,12 +12,6 @@ install_nvm_and_node() {
   nvm use $NODE_VERSION
 }
 
-# Function to install pnpm
-install_pnpm() {
-  echo "Installing pnpm..."
-  npm install -g pnpm
-}
-
 # Function to install Docker
 install_docker() {
   echo "Installing Docker..."
@@ -87,10 +81,5 @@ else
 fi
 
 
-# Check if pnpm is installed
-if ! command -v pnpm >/dev/null 2>&1; then
-  install_pnpm
-fi
-
-pnpm --ignore-workspace install
-./node_modules/.bin/jiti quiz.ts
+npm install
+npm run quiz
self-hosting/update (new executable file, 11 lines)
@@ -0,0 +1,11 @@
+#!/bin/bash
+
+git pull
+
+echo "Pulling latest docker images"
+docker compose pull
+
+echo "Restarting services"
+docker compose restart
+
+echo "Done"