feat(root): added migrations and optimized profile table

Author: Carl-Gerhard Lindesvärd
Date: 2024-09-10 10:08:26 +02:00
Commit: b44f1958a2 (parent: 2258fed24a)
22 changed files with 280 additions and 169 deletions

@@ -1,3 +1,23 @@
-# Ready for docker-compose
+# CLERK
+NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY=CHANGE_ME
+CLERK_SECRET_KEY=CHANGE_ME
+CLERK_SIGNING_SECRET="CHANGE_ME"
+
+# STORAGE
 REDIS_URL="redis://127.0.0.1:6379"
-DATABASE_URL="postgres://username:password@127.0.0.1:5435/postgres?sslmode=disable"
+DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres?schema=public"
+DATABASE_URL_DIRECT="$DATABASE_URL"
+CLICKHOUSE_URL="http://localhost:8123/openpanel"
+
+# REST
+BATCH_SIZE="5000"
+BATCH_INTERVAL="10000"
+CONCURRENCY="10"
+NEXT_PUBLIC_DASHBOARD_URL="http://localhost:3000"
+NEXT_PUBLIC_API_URL="http://localhost:3333"
+WORKER_PORT=9999
+API_PORT=3333
+NEXT_PUBLIC_CLERK_SIGN_IN_URL="/login"
+NEXT_PUBLIC_CLERK_SIGN_UP_URL="/register"
+NEXT_PUBLIC_CLERK_AFTER_SIGN_IN_URL="/"
+NEXT_PUBLIC_CLERK_AFTER_SIGN_UP_URL="/"

.gitignore

@@ -4,7 +4,7 @@ packages/sdk/test.ts
 dump.sql
 dump-*
 .sql
-/clickhouse
+tmp

 # Logs

@@ -75,3 +75,12 @@ You can find the how to [here](https://docs.openpanel.dev/docs/self-hosting)
 **Give us a star if you like it!**
 [![Star History Chart](https://api.star-history.com/svg?repos=Openpanel-dev/openpanel&type=Date)](https://star-history.com/#Openpanel-dev/openpanel&Date)
+
+## Development
+
+```bash
+pnpm docker
+pnpm codegen
+pnpm migrate:deploy # run once to set up the db
+pnpm dev
+```

@@ -1,64 +1,64 @@
(The entire file is rewritten to change the brand casing from "Openpanel" to "OpenPanel"; the new version follows.)

# OpenPanel Trademark Guidelines

## Overview

Welcome to OpenPanel's Trademark Guidelines. These guidelines are designed to help you understand how to use and refer to the OpenPanel brand and trademarks properly. By following these guidelines, you contribute to maintaining the integrity of the OpenPanel brand.

## Trademark Usage

### OpenPanel Logo

The OpenPanel logo is a key element of our brand identity. To ensure consistency and visibility, please adhere to the following guidelines:

- **Do not modify or alter the OpenPanel logo.**
- **Maintain proper spacing around the logo to ensure clarity and legibility.**
- **Use the official OpenPanel logo assets provided on our official website.**

### OpenPanel Name

When referring to OpenPanel in text, please follow these guidelines:

- **Use the full, unaltered "OpenPanel" name when mentioning our product.**
- **Capitalize the "O" in OpenPanel.**
- **Avoid using OpenPanel in a way that could be misleading or imply endorsement.**

## Domain Names

To avoid confusion and maintain the clarity of the OpenPanel brand, please refrain from using domain names that may be misleading or suggest an official affiliation with OpenPanel.

## Open Source Projects

If you are developing an open-source project related to OpenPanel, feel free to use and reference our trademarks as long as it is clear that your project is not officially endorsed by or affiliated with OpenPanel.

## Contact Us

If you have any questions or need further clarification on the use of OpenPanel trademarks, please contact us at [hello@openpanel.dev].

---

## Acceptable Uses

You are permitted to use the OpenPanel name in the following situations, provided it is done truthfully and accurately:

- To refer to OpenPanel and its products and services in news articles and other content without alteration.
- To discuss OpenPanel and its products in a fair and honest manner that does not imply sponsorship, endorsement, or affiliation with OpenPanel.
- To refer to and/or link to the products and services hosted on OpenPanel's servers and website.
- To indicate that your product, service, or solution integrates with, is interoperable with, or is compatible with OpenPanel, as long as it does not create confusion about the origin of your offering.
- You may use our word marks as part of a public subdomain solely for serving as the URL for your self-managed OpenPanel instance (e.g., openpanel.companyname.com).

## Prohibited Uses

Unless you have explicit written permission from OpenPanel or your use falls under the acceptable uses mentioned above, the use of OpenPanel trademarks is strictly prohibited. Here are examples of prohibited uses that may be considered for permission upon request:

- Use of OpenPanel trademarks in connection with a public website offering OpenPanel software for installation and use on a server (instead of directing users to the official OpenPanel site).
- Use of OpenPanel trademarks in connection with versions of OpenPanel products made publicly available or offered in the cloud by a managed service provider, on a resale, or on another commercial basis.
- Use of OpenPanel trademarks in connection with bundling OpenPanel products with other software.

In these cases:

- Adherence to the terms of the open-source license for OpenPanel software products and code is mandatory.
- Removal of all OpenPanel logos is required, with the adoption of your own branding to clearly signify no affiliation with or endorsement by OpenPanel.
- Avoid using any OpenPanel trademark in connection with the user-facing name, branding, or marketing materials of your project.
- Use of word marks, but not logos, in truthful statements describing the relationship between your software and OpenPanel is allowed. For instance, "this software is derived from the source code of the OpenPanel software," along with a disclaimer that your project is not officially associated with OpenPanel or its products.

OpenPanel reserves the right, at its sole discretion, to (i) terminate, revoke, modify, or change permission to use the trademarks at any time; and (ii) object to any use or misuse of the trademarks globally. Any changes to these guidelines are effective immediately upon posting, and your continued use of the trademarks following revised guidelines signifies your acceptance of such revisions.

@@ -7,9 +7,15 @@ apt-get install -y --no-install-recommends \
     ca-certificates \
     openssl \
     libssl3 \
+    curl \
+    netcat-openbsd \
     && apt-get clean && \
     rm -rf /var/lib/apt/lists/*

+RUN curl -fsSL \
+    https://raw.githubusercontent.com/pressly/goose/master/install.sh |\
+    sh
+
 ARG DATABASE_URL
 ENV DATABASE_URL=$DATABASE_URL
 ENV PNPM_HOME="/pnpm"

docker-compose.yml (new file)

@@ -0,0 +1,44 @@
version: '3'

services:
  op-db:
    image: postgres:14-alpine
    restart: always
    volumes:
      - ./tmp/op-db-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

  op-kv:
    image: redis:7.2.5-alpine
    restart: always
    volumes:
      - ./tmp/op-kv-data:/data
    command: ['redis-server', '--maxmemory-policy', 'noeviction']
    ports:
      - 6379:6379

  op-geo:
    image: observabilitystack/geoip-api:latest
    restart: always
    ports:
      - 8080:8080

  op-ch:
    image: clickhouse/clickhouse-server:24.3.2-alpine
    restart: always
    volumes:
      - ./tmp/op-ch-data:/var/lib/clickhouse
      - ./tmp/op-ch-logs:/var/log/clickhouse-server
      - ./clickhouse/clickhouse-config.xml:/etc/clickhouse-server/config.d/op-config.xml:ro
      - ./clickhouse/clickhouse-user-config.xml:/etc/clickhouse-server/users.d/op-user-config.xml:ro
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    ports:
      - 9000:9000
      - 8123:8123

@@ -7,7 +7,10 @@
   "packageManager": "pnpm@8.7.6",
   "module": "index.ts",
   "scripts": {
+    "up": "docker compose up",
+    "down": "docker compose down",
     "db:codegen": "pnpm -r --filter db run codegen",
+    "codegen": "pnpm db:codegen",
     "migrate": "pnpm -r --filter db run migrate",
     "migrate:deploy": "pnpm -r --filter db run migrate:deploy",
     "dev": "pnpm -r --parallel testing",

@@ -1,6 +1,6 @@
-CREATE DATABASE IF NOT EXISTS openpanel;
-CREATE TABLE IF NOT EXISTS openpanel.self_hosting
+-- +goose Up
+-- +goose StatementBegin
+CREATE TABLE IF NOT EXISTS self_hosting
 (
     created_at Date,
     domain String,
@@ -9,9 +9,10 @@ CREATE TABLE IF NOT EXISTS openpanel.self_hosting
 ENGINE = MergeTree()
 ORDER BY (domain, created_at)
 PARTITION BY toYYYYMM(created_at);
-CREATE TABLE IF NOT EXISTS openpanel.events_v2 (
+-- +goose StatementEnd
+-- +goose StatementBegin
+CREATE TABLE IF NOT EXISTS events_v2 (
     `id` UUID DEFAULT generateUUIDv4(),
     `name` String,
     `sdk_name` String,
@@ -48,20 +49,25 @@ CREATE TABLE IF NOT EXISTS openpanel.events_v2 (
 ) ENGINE = MergeTree PARTITION BY toYYYYMM(created_at)
 ORDER BY
     (project_id, toDate(created_at), profile_id, name) SETTINGS index_granularity = 8192;
-CREATE TABLE IF NOT EXISTS openpanel.events_bots (
+-- +goose StatementEnd
+-- +goose StatementBegin
+CREATE TABLE IF NOT EXISTS events_bots (
     `id` UUID DEFAULT generateUUIDv4(),
     `project_id` String,
     `name` String,
     `type` String,
     `path` String,
-    `created_at` DateTime64(3),
+    `created_at` DateTime64(3)
 ) ENGINE MergeTree
 ORDER BY
     (project_id, created_at) SETTINGS index_granularity = 8192;
-CREATE TABLE IF NOT EXISTS openpanel.profiles (
+-- +goose StatementEnd
+-- +goose StatementBegin
+CREATE TABLE IF NOT EXISTS profiles (
     `id` String,
+    `is_external` Bool,
     `first_name` String,
     `last_name` String,
     `email` String,
@@ -72,8 +78,10 @@ CREATE TABLE IF NOT EXISTS openpanel.profiles (
 ) ENGINE = ReplacingMergeTree(created_at)
 ORDER BY
     (id) SETTINGS index_granularity = 8192;
-CREATE TABLE IF NOT EXISTS openpanel.profile_aliases (
+-- +goose StatementEnd
+-- +goose StatementBegin
+CREATE TABLE IF NOT EXISTS profile_aliases (
     `project_id` String,
     `profile_id` String,
     `alias` String,
@@ -81,8 +89,9 @@ CREATE TABLE IF NOT EXISTS openpanel.profile_aliases (
 ) ENGINE = MergeTree
 ORDER BY
     (project_id, profile_id, alias, created_at) SETTINGS index_granularity = 8192;
--- Materialized views (DAU)
+-- +goose StatementEnd
+-- +goose StatementBegin
 CREATE MATERIALIZED VIEW IF NOT EXISTS dau_mv ENGINE = AggregatingMergeTree() PARTITION BY toYYYYMMDD(date)
 ORDER BY
     (project_id, date) POPULATE AS
@@ -94,4 +103,10 @@ FROM
     events_v2
 GROUP BY
     date,
     project_id;
+-- +goose StatementEnd
+
+-- +goose Down
+-- +goose StatementBegin
+SELECT 'down SQL query';
+-- +goose StatementEnd

@@ -0,0 +1,44 @@
-- +goose Up
-- +goose StatementBegin
CREATE TABLE profiles_tmp
(
    `id` String,
    `is_external` Bool,
    `first_name` String,
    `last_name` String,
    `email` String,
    `avatar` String,
    `properties` Map(String, String),
    `project_id` String,
    `created_at` DateTime,
    INDEX idx_first_name first_name TYPE bloom_filter GRANULARITY 1,
    INDEX idx_last_name last_name TYPE bloom_filter GRANULARITY 1,
    INDEX idx_email email TYPE bloom_filter GRANULARITY 1
)
ENGINE = ReplacingMergeTree(created_at)
PARTITION BY toYYYYMM(created_at)
ORDER BY (project_id, created_at, id)
SETTINGS index_granularity = 8192;
-- +goose StatementEnd

-- +goose StatementBegin
INSERT INTO profiles_tmp SELECT
    id,
    is_external,
    first_name,
    last_name,
    email,
    avatar,
    properties,
    project_id,
    created_at
FROM profiles;
-- +goose StatementEnd

-- +goose StatementBegin
OPTIMIZE TABLE profiles_tmp FINAL;
-- +goose StatementEnd

-- +goose StatementBegin
RENAME TABLE profiles TO profiles_old, profiles_tmp TO profiles;
-- +goose StatementEnd

-- +goose StatementBegin
DROP TABLE profiles_old;
-- +goose StatementEnd

packages/db/migrations/goose (new executable file)

@@ -0,0 +1,11 @@
#!/bin/bash

if [ -z "$CLICKHOUSE_URL" ]; then
  echo "CLICKHOUSE_URL is not set"
  exit 1
fi

export GOOSE_DBSTRING=$CLICKHOUSE_URL
goose clickhouse --dir ./migrations "$@"
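The wrapper refuses to run without a connection string and forwards its arguments to goose (so `./migrations/goose status` becomes `goose clickhouse --dir ./migrations status`). The guard pattern in isolation, as a minimal sketch (not the committed script, and `require_env` is a hypothetical helper):

```shell
#!/bin/sh
# Minimal sketch of the env-var guard used by the goose wrapper.
require_env() {
  # $1 = variable name, $2 = its value; fails when the value is empty
  if [ -z "$2" ]; then
    echo "$1 is not set"
    return 1
  fi
  echo "$1 ok"
}

require_env CLICKHOUSE_URL ""
require_env CLICKHOUSE_URL "http://localhost:8123/openpanel"
```

Failing fast here keeps goose from starting with an empty `GOOSE_DBSTRING` and producing a less obvious connection error.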

@@ -3,9 +3,12 @@
   "version": "0.0.1",
   "main": "index.ts",
   "scripts": {
+    "goose": "pnpm with-env ./migrations/goose",
     "codegen": "pnpm with-env prisma generate",
     "migrate": "pnpm with-env prisma migrate dev",
-    "migrate:deploy": "pnpm with-env prisma migrate deploy",
+    "migrate:deploy:db": "pnpm with-env prisma migrate deploy",
+    "migrate:deploy:ch": "pnpm goose up",
+    "migrate:deploy": "pnpm migrate:deploy:db && pnpm migrate:deploy:ch",
     "lint": "eslint .",
     "format": "prettier --check \"**/*.{mjs,ts,md,json}\"",
     "typecheck": "tsc --noEmit",
@@ -44,4 +47,4 @@
     ]
   },
   "prettier": "@openpanel/prettier-config"
 }
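The redefined `migrate:deploy` chains the Postgres and ClickHouse steps with `&&`, so goose only runs if the Prisma deploy exits successfully. A minimal sketch of that short-circuit behavior (`step_db` and `step_ch` are hypothetical stand-ins for `prisma migrate deploy` and `goose up`):

```shell
#!/bin/sh
# Sketch of the `&&` chaining used by "migrate:deploy".
step_db() { echo "prisma migrate deploy"; }
step_ch() { echo "goose up"; }

# The ClickHouse step runs only when the Postgres step exits 0.
step_db && step_ch
```

If the first step fails, the whole script's exit status is non-zero and the ClickHouse schema is left untouched.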

@@ -9,13 +9,12 @@ export const TABLE_NAMES = {
   profiles: 'profiles',
   alias: 'profile_aliases',
   self_hosting: 'self_hosting',
+  events_bots: 'events_bots',
+  dau_mv: 'dau_mv',
 };

 export const originalCh = createClient({
   url: process.env.CLICKHOUSE_URL,
-  username: process.env.CLICKHOUSE_USER,
-  password: process.env.CLICKHOUSE_PASSWORD,
-  database: process.env.CLICKHOUSE_DB,
   max_open_connections: 30,
   request_timeout: 30000,
   keep_alive: {

@@ -225,23 +225,24 @@ export async function getEvents(
   options: GetEventsOptions = {}
 ): Promise<IServiceEvent[]> {
   const events = await chQuery<IClickhouseEvent>(sql);

-  if (options.profile) {
+  const projectId = events[0]?.project_id;
+
+  if (options.profile && projectId) {
     const ids = events.map((e) => e.profile_id);
-    const profiles = await getProfiles(ids);
+    const profiles = await getProfiles(ids, projectId);
     for (const event of events) {
       event.profile = profiles.find((p) => p.id === event.profile_id);
     }
   }

-  if (options.meta) {
+  if (options.meta && projectId) {
     const names = uniq(events.map((e) => e.name));
     const metas = await db.eventMeta.findMany({
       where: {
         name: {
           in: names,
         },
-        projectId: events[0]?.project_id,
+        projectId,
       },
       select: options.meta === true ? undefined : options.meta,
     });

@@ -69,7 +69,7 @@ interface GetProfileListOptions {
   search?: string;
 }

-export async function getProfiles(ids: string[]) {
+export async function getProfiles(ids: string[], projectId: string) {
   const filteredIds = uniq(ids.filter((id) => id !== ''));

   if (filteredIds.length === 0) {
@@ -78,8 +78,10 @@ export async function getProfiles(ids: string[]) {
   const data = await chQuery<IClickhouseProfile>(
     `SELECT id, first_name, last_name, email, avatar, is_external
-    FROM profiles FINAL
-    WHERE id IN (${filteredIds.map((id) => escape(id)).join(',')})
+    FROM ${TABLE_NAMES.profiles} FINAL
+    WHERE
+      project_id = ${escape(projectId)} AND
+      id IN (${filteredIds.map((id) => escape(id)).join(',')})
     `
   );
@@ -94,18 +96,14 @@ export async function getProfileList({
   search,
 }: GetProfileListOptions) {
   const { sb, getSql } = createSqlBuilder();
-  sb.from = 'profiles FINAL';
+  sb.from = `${TABLE_NAMES.profiles} FINAL`;
   sb.select.all = '*';
   sb.where.project_id = `project_id = ${escape(projectId)}`;
   sb.limit = take;
   sb.offset = Math.max(0, (cursor ?? 0) * take);
   sb.orderBy.created_at = 'created_at DESC';
   if (search) {
-    if (search.includes('@')) {
-      sb.where.email = `email ILIKE '%${search}%'`;
-    } else {
-      sb.where.first_name = `first_name ILIKE '%${search}%' OR last_name ILIKE '%${search}%'`;
-    }
+    sb.where.search = `(email ILIKE '%${search}%' OR first_name ILIKE '%${search}%' OR last_name ILIKE '%${search}%')`;
   }
   const data = await chQuery<IClickhouseProfile>(getSql());
   return data.map(transformProfile);

@@ -121,7 +121,7 @@ export function getRollingActiveUsers({
   FROM
   (
     SELECT *
-    FROM dau_mv
+    FROM ${TABLE_NAMES.dau_mv}
     WHERE project_id = ${escape(projectId)}
   )
   ARRAY JOIN range(${days}) AS n

@@ -430,7 +430,10 @@ export async function getFunnelStep({
     id: string;
   }>(profileIdsQuery);

-  return getProfiles(res.map((r) => r.id));
+  return getProfiles(
+    res.map((r) => r.id),
+    projectId
+  );
 }

 export async function getChartSerie(payload: IGetChartDataInput) {

@@ -151,12 +151,12 @@ export const eventRouter = createTRPCRouter({
         path: string;
         created_at: string;
       }>(
-        `SELECT * FROM events_bots WHERE project_id = ${escape(projectId)} ORDER BY created_at DESC LIMIT ${limit} OFFSET ${(cursor ?? 0) * limit}`
+        `SELECT * FROM ${TABLE_NAMES.events_bots} WHERE project_id = ${escape(projectId)} ORDER BY created_at DESC LIMIT ${limit} OFFSET ${(cursor ?? 0) * limit}`
       ),
       chQuery<{
         count: number;
       }>(
-        `SELECT count(*) as count FROM events_bots WHERE project_id = ${escape(projectId)}`
+        `SELECT count(*) as count FROM ${TABLE_NAMES.events_bots} WHERE project_id = ${escape(projectId)}`
       ),
     ]);

@@ -17,7 +17,7 @@ export const profileRouter = createTRPCRouter({
     .input(z.object({ projectId: z.string() }))
     .query(async ({ input: { projectId } }) => {
       const events = await chQuery<{ keys: string[] }>(
-        `SELECT distinct mapKeys(properties) as keys from profiles where project_id = ${escape(projectId)};`
+        `SELECT distinct mapKeys(properties) as keys from ${TABLE_NAMES.profiles} where project_id = ${escape(projectId)};`
       );

       const properties = events
@@ -61,7 +61,10 @@ export const profileRouter = createTRPCRouter({
       const res = await chQuery<{ profile_id: string; count: number }>(
         `SELECT profile_id, count(*) as count from ${TABLE_NAMES.events} where profile_id != '' and project_id = ${escape(projectId)} group by profile_id order by count() DESC LIMIT ${take} ${cursor ? `OFFSET ${cursor * take}` : ''}`
       );
-      const profiles = await getProfiles(res.map((r) => r.profile_id));
+      const profiles = await getProfiles(
+        res.map((r) => r.profile_id),
+        projectId
+      );
       return (
         res
           .map((item) => {
@@ -84,7 +87,7 @@ export const profileRouter = createTRPCRouter({
       )
       .query(async ({ input: { property, projectId } }) => {
         const { sb, getSql } = createSqlBuilder();
-        sb.from = 'profiles';
+        sb.from = TABLE_NAMES.profiles;
         sb.where.project_id = `project_id = ${escape(projectId)}`;
         if (property.startsWith('properties.')) {
           sb.select.values = `distinct arrayMap(x -> trim(x), mapValues(mapExtractKeyLike(properties, ${escape(

@@ -10,9 +10,6 @@ BATCH_INTERVAL="10000"

 # Will be replaced with the setup script
 REDIS_URL="$REDIS_URL"
 CLICKHOUSE_URL="$CLICKHOUSE_URL"
-CLICKHOUSE_DB="$CLICKHOUSE_DB"
-CLICKHOUSE_USER="$CLICKHOUSE_USER"
-CLICKHOUSE_PASSWORD="$CLICKHOUSE_PASSWORD"
 DATABASE_URL="$DATABASE_URL"
 DATABASE_URL_DIRECT="$DATABASE_URL_DIRECT"
 NEXT_PUBLIC_DASHBOARD_URL="$NEXT_PUBLIC_DASHBOARD_URL"

clickhouse/init-db.sh (new file)

@@ -0,0 +1,6 @@
#!/bin/bash
set -e

clickhouse client -n <<-EOSQL
    CREATE DATABASE IF NOT EXISTS openpanel;
EOSQL

@@ -20,50 +20,41 @@ services:
     restart: always
     volumes:
       - op-db-data:/var/lib/postgresql/data
-    environment:
-      - POSTGRES_PASSWORD
     healthcheck:
       test: ['CMD-SHELL', 'pg_isready -U postgres']
       interval: 10s
       timeout: 5s
       retries: 5
-    ports:
-      - 5431:5432
+    environment:
+      - POSTGRES_USER=postgres
+      - POSTGRES_PASSWORD=postgres
+    # Uncomment to expose ports
+    # ports:
+    #   - 5432:5432

   op-kv:
     image: redis:7.2.5-alpine
     restart: always
     volumes:
       - op-kv-data:/data
-    command:
-      [
-        'redis-server',
-        '--requirepass',
-        '${REDIS_PASSWORD}',
-        '--maxmemory-policy',
-        'noeviction',
-      ]
-    ports:
-      - 6378:6379
-    environment:
-      - REDIS_PASSWORD=${REDIS_PASSWORD}
+    command: ['redis-server', '--maxmemory-policy', 'noeviction']
+    # Uncomment to expose ports
+    # ports:
+    #   - 6379:6379

   op-geo:
     image: observabilitystack/geoip-api:latest
     restart: always

   op-ch:
-    image: clickhouse/clickhouse-server:23.3.7.5-alpine
+    image: clickhouse/clickhouse-server:24.3.2-alpine
     restart: always
     volumes:
       - op-ch-data:/var/lib/clickhouse
       - op-ch-logs:/var/log/clickhouse-server
       - ./clickhouse/clickhouse-config.xml:/etc/clickhouse-server/config.d/op-config.xml:ro
       - ./clickhouse/clickhouse-user-config.xml:/etc/clickhouse-server/users.d/op-user-config.xml:ro
-    environment:
-      - CLICKHOUSE_DB
-      - CLICKHOUSE_USER
-      - CLICKHOUSE_PASSWORD
+      - ./clickhouse/init-db.sh:/docker-entrypoint-initdb.d/init-db.sh:ro
     healthcheck:
       test: ['CMD-SHELL', 'clickhouse-client --query "SELECT 1"']
       interval: 10s
@@ -73,37 +64,34 @@ services:
       nofile:
         soft: 262144
         hard: 262144
-    ports:
-      - 8999:9000
-      - 8122:8123
+    # Uncomment to expose ports
+    # ports:
+    #   - 9000:9000
+    #   - 8123:8123

-  op-ch-migrator:
-    image: clickhouse/clickhouse-server:23.3.7.5-alpine
-    depends_on:
-      - op-ch
-    volumes:
-      - ../packages/db/clickhouse_init.sql:/migrations/clickhouse_init.sql
-    environment:
-      - CLICKHOUSE_DB
-      - CLICKHOUSE_USER
-      - CLICKHOUSE_PASSWORD
-    entrypoint: /bin/sh -c
-    command: >
-      "
-      echo 'Waiting for ClickHouse to start...';
-      while ! clickhouse-client --host op-ch --user=$CLICKHOUSE_USER --password=$CLICKHOUSE_PASSWORD --query 'SELECT 1;' 2>/dev/null; do
-        echo 'ClickHouse is unavailable - sleeping 1s...';
-        sleep 1;
-      done;
-      echo 'ClickHouse started. Running migrations...';
-      clickhouse-client --host op-ch --database=$CLICKHOUSE_DB --user=$CLICKHOUSE_USER --password=$CLICKHOUSE_PASSWORD --queries-file /migrations/clickhouse_init.sql;
-      "

   op-api:
     image: lindesvard/openpanel-api:latest
     restart: always
-    command: sh -c "sleep 10 && pnpm -r run migrate:deploy && pnpm start"
+    command: >
+      sh -c "
+        echo 'Waiting for PostgreSQL to be ready...'
+        while ! nc -z op-db 5432; do
+          sleep 1
+        done
+        echo 'PostgreSQL is ready'
+        echo 'Waiting for ClickHouse to be ready...'
+        while ! nc -z op-ch 8123; do
+          sleep 1
+        done
+        echo 'ClickHouse is ready'
+        echo 'Running migrations...'
+        pnpm -r run migrate:deploy
+        pnpm start
+      "
     depends_on:
       - op-db
       - op-ch
@@ -116,9 +104,7 @@ services:
     image: lindesvard/openpanel-dashboard:latest
     restart: always
     depends_on:
-      - op-db
-      - op-ch
-      - op-kv
+      - op-api
     env_file:
       - .env
@@ -126,9 +112,7 @@ services:
     image: lindesvard/openpanel-worker:latest
     restart: always
     depends_on:
-      - op-db
-      - op-ch
-      - op-kv
+      - op-api
     env_file:
       - .env
     deploy:
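The new op-api `command:` replaces a blind `sleep 10` with a poll loop: `while ! nc -z host port; do sleep 1; done` retries until the dependency accepts TCP connections, and only then runs migrations. The retry loop can be sketched generically (`wait_for` is a hypothetical helper, not part of the compose file):

```shell
#!/bin/sh
# Generic poll-until-ready loop, mirroring the `while ! nc -z op-db 5432`
# pattern used before running migrations. wait_for is a hypothetical helper.
wait_for() {
  # $1: command that succeeds once the dependency is ready
  # $2: max attempts before giving up
  attempts=0
  until sh -c "$1"; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge "$2" ]; then
      echo "gave up after $2 attempts"
      return 1
    fi
    sleep 1
  done
  echo "ready"
}

wait_for "true" 5
wait_for "false" 2
```

A bounded attempt count (unlike the unbounded compose loop) lets the container fail visibly instead of hanging forever when a dependency never comes up.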

@@ -112,12 +112,7 @@ function removeServiceFromDockerCompose(serviceName: string) {
 }

 function writeEnvFile(envs: {
-  POSTGRES_PASSWORD: string | undefined;
-  REDIS_PASSWORD: string | undefined;
   CLICKHOUSE_URL: string;
-  CLICKHOUSE_DB: string;
-  CLICKHOUSE_USER: string;
-  CLICKHOUSE_PASSWORD: string;
   REDIS_URL: string;
   DATABASE_URL: string;
   DOMAIN_NAME: string;
@@ -131,9 +126,6 @@ function writeEnvFile(envs: {
   let newEnvFile = envTemplate
     .replace('$CLICKHOUSE_URL', envs.CLICKHOUSE_URL)
-    .replace('$CLICKHOUSE_DB', envs.CLICKHOUSE_DB)
-    .replace('$CLICKHOUSE_USER', envs.CLICKHOUSE_USER)
-    .replace('$CLICKHOUSE_PASSWORD', envs.CLICKHOUSE_PASSWORD)
     .replace('$REDIS_URL', envs.REDIS_URL)
     .replace('$DATABASE_URL', envs.DATABASE_URL)
     .replace('$DATABASE_URL_DIRECT', envs.DATABASE_URL)
@@ -149,10 +141,6 @@ function writeEnvFile(envs: {
     .replace('$CLERK_SECRET_KEY', envs.CLERK_SECRET_KEY)
     .replace('$CLERK_SIGNING_SECRET', envs.CLERK_SIGNING_SECRET);

-  if (envs.POSTGRES_PASSWORD) {
-    newEnvFile += `\nPOSTGRES_PASSWORD=${envs.POSTGRES_PASSWORD}`;
-  }
-
   fs.writeFileSync(
     envPath,
     newEnvFile
@@ -234,26 +222,9 @@ async function initiateOnboarding() {
     {
       type: 'input',
       name: 'CLICKHOUSE_URL',
-      message: 'Enter your ClickHouse URL:',
-      default: process.env.DEBUG ? 'http://clickhouse:8123' : undefined,
-    },
-    {
-      type: 'input',
-      name: 'CLICKHOUSE_DB',
-      message: 'Enter your ClickHouse DB name:',
-      default: process.env.DEBUG ? 'db_openpanel' : undefined,
-    },
-    {
-      type: 'input',
-      name: 'CLICKHOUSE_USER',
-      message: 'Enter your ClickHouse user name:',
-      default: process.env.DEBUG ? 'user_openpanel' : undefined,
-    },
-    {
-      type: 'input',
-      name: 'CLICKHOUSE_PASSWORD',
-      message: 'Enter your ClickHouse password:',
-      default: process.env.DEBUG ? 'ch_password' : undefined,
+      message:
+        'Enter your ClickHouse URL (format: http://user:pw@host:port/db):',
+      default: process.env.DEBUG ? 'http://op-ch:8123/openpanel' : undefined,
     },
   ]);
@@ -268,8 +239,8 @@ async function initiateOnboarding() {
     {
       type: 'input',
       name: 'REDIS_URL',
-      message: 'Enter your Redis URL:',
-      default: process.env.DEBUG ? 'redis://redis:6379' : undefined,
+      message: 'Enter your Redis URL (format: redis://user:pw@host:port/db):',
+      default: process.env.DEBUG ? 'redis://op-kv:6379' : undefined,
     },
   ]);
@@ -283,9 +254,10 @@ async function initiateOnboarding() {
     {
       type: 'input',
       name: 'DATABASE_URL',
-      message: 'Enter your Database URL:',
+      message:
+        'Enter your Database URL (format: postgresql://user:pw@host:port/db):',
       default: process.env.DEBUG
-        ? 'postgresql://postgres:postgres@postgres:5432/postgres?schema=public'
+        ? 'postgresql://postgres:postgres@op-db:5432/postgres?schema=public'
         : undefined,
     },
   ]);
@@ -399,20 +371,13 @@ async function initiateOnboarding() {
   console.log('');
   console.log('Creating .env file...\n');

-  const POSTGRES_PASSWORD = generatePassword(20);
-  const REDIS_PASSWORD = generatePassword(20);
-
   writeEnvFile({
-    POSTGRES_PASSWORD: envs.DATABASE_URL ? undefined : POSTGRES_PASSWORD,
-    REDIS_PASSWORD: envs.REDIS_URL ? undefined : REDIS_PASSWORD,
-    CLICKHOUSE_URL: envs.CLICKHOUSE_URL || 'http://op-ch:8123',
-    CLICKHOUSE_DB: envs.CLICKHOUSE_DB || 'openpanel',
-    CLICKHOUSE_USER: envs.CLICKHOUSE_USER || 'openpanel',
-    CLICKHOUSE_PASSWORD: envs.CLICKHOUSE_PASSWORD || generatePassword(20),
+    CLICKHOUSE_URL: envs.CLICKHOUSE_URL || 'http://op-ch:8123/openpanel',
     REDIS_URL: envs.REDIS_URL || 'redis://op-kv:6379',
     DATABASE_URL:
       envs.DATABASE_URL ||
-      `postgresql://postgres:${POSTGRES_PASSWORD}@op-db:5432/postgres?schema=public`,
+      `postgresql://postgres:postgres@op-db:5432/postgres?schema=public`,
     DOMAIN_NAME: domainNameResponse.domainName,
     NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY:
       clerkResponse.NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY || '',