feat: write objects to blob storage (#8557)
* feat: basic blobstore infrastructure for dev
* refactor: (broken) attempt to put minio console behind nginx
* feat: initialize blobstore with boto3
* fix: abandon attempt to proxy minio. Use docker compose instead.
* feat: beginning of blob writes
* feat: storage utilities
* feat: test buckets
* chore: black
* chore: remove unused import
* chore: avoid f-string when not needed
* fix: inform all settings files about blobstores
* fix: declare types for some settings
* ci: point to new target base
* ci: adjust test workflow
* fix: give the tests debug environment a blobstore
* fix: "better" name declarations
* ci: use devblobstore container
* chore: identify places to write to blobstorage
* chore: remove unreachable code
* feat: store materials
* feat: store statements
* feat: store status changes
* feat: store liaison attachments
* feat: store agendas provided with Interim session requests
* chore: capture TODOs
* feat: store polls and chatlogs
* chore: remove unneeded TODO
* feat: store drafts on submit and post
* fix: handle storage during doc expiration and resurrection
* fix: mirror an unlink
* chore: add/refine TODOs
* feat: store slide submissions
* fix: structure slide test correctly
* fix: correct sense of existence check
* feat: store some indexes
* feat: BlobShadowFileSystemStorage
* feat: shadow floorplans / host logos to the blob
* chore: remove unused import
* feat: strip path from blob shadow names
* feat: shadow photos / thumbs
* refactor: combine photo and photothumb blob kinds
  The photos / thumbs were already dropped in the same directory, so let's not add a distinction at this point.
* style: whitespace
* refactor: use kwargs consistently
* chore: migrations
* refactor: better deconstruct(); rebuild migrations
* fix: use new class in mock patch
* chore: add TODO
* feat: store group index documents
* chore: identify more TODOs
* feat: store reviews
* fix: repair merge
* chore: remove unnecessary TODO
* feat: StoredObject metadata
* fix: deburr some debugging code
* fix: only set the deleted timestamp once
* chore: correct typo
* fix: get_or_create vs get and test
* fix: avoid the questionable is_seekable helper
* chore: capture future design consideration
* chore: blob store cfg for k8s
* chore: black
* chore: copyright
* ci: bucket name prefix option + run Black
  Adds/uses DATATRACKER_BLOB_STORE_BUCKET_PREFIX option. Other changes are just Black styling.
* ci: fix typo in bucket name expression
* chore: parameters in app-configure-blobstore
  Allows use with other blob stores.
* ci: remove verify=False option
* fix: don't return value from __init__
* feat: option to log timing of S3Storage calls
* chore: units
* fix: deleted->null when storing a file
* style: Black
* feat: log as JSON; refactor to share code; handle exceptions
* ci: add ietf_log_blob_timing option for k8s
* test: --no-manage-blobstore option for running tests
* test: use blob store settings from env, if set
* test: actually set a couple more storage opts
* feat: offswitch (#8541)
  * feat: offswitch
  * fix: apply ENABLE_BLOBSTORAGE to BlobShadowFileSystemStorage behavior
* chore: log timing of blob reads
* chore: import Config from botocore.config
* chore(deps): import boto3-stubs / botocore
  botocore is implicitly imported, but make it explicit since we refer to it directly
* chore: drop type annotation that mypy loudly ignores
* refactor: add storage methods via mixin
  Shares code between Document and DocHistory without putting it in the base DocumentInfo class, which lacks the name field. Also makes mypy happy.
* feat: add timeout / retry limit to boto client
* ci: let k8s config the timeouts via env
* chore: repair merge resolution typo
* chore: tweak settings imports
* chore: simplify k8s/settings_local.py imports

---------

Co-authored-by: Jennifer Richards <jennifer@staff.ietf.org>
parent e71272fd2f
commit 997239a2ea
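The last few items above add storage helper methods to `Document` and `DocHistory` via `StorableMixin`. As a rough sketch of what that enables for callers (the document name here is hypothetical; the method signatures come from the `ietf/doc/models.py` and `ietf/doc/storage_utils.py` hunks below):

```python
# Sketch only - assumes Django is configured, ENABLE_BLOBSTORAGE is on, and
# doc.type_id ("bofreq" here) names one of the configured blob stores.
from ietf.doc.models import Document
from ietf.doc.storage_utils import retrieve_str

doc = Document.objects.get(name="bofreq-example-topic")  # hypothetical document
doc.store_str(doc.get_base_name(), "# Example contents", allow_overwrite=True)
assert retrieve_str(doc.type_id, doc.get_base_name()) == "# Example contents"
```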
@@ -14,6 +14,10 @@ services:
     #   - datatracker-vscode-ext:/root/.vscode-server/extensions
     # Runs app on the same network as the database container, allows "forwardPorts" in devcontainer.json function.
     network_mode: service:db
+  blobstore:
+    ports:
+      - '9000'
+      - '9001'
 
 volumes:
   datatracker-vscode-ext:
.github/workflows/tests.yml
@@ -28,6 +28,8 @@ jobs:
     services:
       db:
         image: ghcr.io/ietf-tools/datatracker-db:latest
+      blobstore:
+        image: ghcr.io/ietf-tools/datatracker-devblobstore:latest
 
     steps:
       - uses: actions/checkout@v4
README.md
@@ -106,6 +106,23 @@ Nightly database dumps of the datatracker are available as Docker images: `ghcr.
 
 > Note that to update the database in your dev environment to the latest version, you should run the `docker/cleandb` script.
 
+### Blob storage for dev/test
+
+The dev and test environments use [minio](https://github.com/minio/minio) to provide local blob storage. See the settings files for how the app container communicates with the blobstore container. If you need to work with minio directly from outside the containers (to interact with its API or console), use `docker compose` from the top-level directory of your clone to expose it at an ephemeral port:
+
+```
+$ docker compose port blobstore 9001
+0.0.0.0:<some ephemeral port>
+
+$ curl -I http://localhost:<some ephemeral port>
+HTTP/1.1 200 OK
+...
+```
+
+The minio container exposes the minio API at port 9000 and the minio console at port 9001.
+
 ### Frontend Development
 
 #### Intro
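To poke at the blobstore programmatically rather than through the console, a small boto3 session can talk to the exposed API port. A sketch, assuming the ephemeral port reported by `docker compose port blobstore 9000` and the dev credentials used throughout this change:

```python
import boto3

blobstore = boto3.resource(
    "s3",
    endpoint_url="http://localhost:49152",  # substitute your ephemeral port
    aws_access_key_id="minio_root",
    aws_secret_access_key="minio_pass",
)
for bucket in blobstore.buckets.all():  # list the dev buckets
    print(bucket.name)
```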
@@ -1,7 +1,9 @@
 # Copyright The IETF Trust 2007-2019, All Rights Reserved
 # -*- coding: utf-8 -*-
 
 from ietf.settings import *  # pyflakes:ignore
+from ietf.settings import STORAGES, MORE_STORAGE_NAMES, BLOBSTORAGE_CONNECT_TIMEOUT, BLOBSTORAGE_READ_TIMEOUT, BLOBSTORAGE_MAX_ATTEMPTS
+import botocore.config
 
 ALLOWED_HOSTS = ['*']
 
@@ -79,3 +81,22 @@ APP_API_TOKENS = {
 
 # OIDC configuration
 SITE_URL = 'https://__HOSTNAME__'
+
+for storagename in MORE_STORAGE_NAMES:
+    STORAGES[storagename] = {
+        "BACKEND": "ietf.doc.storage_backends.CustomS3Storage",
+        "OPTIONS": dict(
+            endpoint_url="http://blobstore:9000",
+            access_key="minio_root",
+            secret_key="minio_pass",
+            security_token=None,
+            client_config=botocore.config.Config(
+                signature_version="s3v4",
+                connect_timeout=BLOBSTORAGE_CONNECT_TIMEOUT,
+                read_timeout=BLOBSTORAGE_READ_TIMEOUT,
+                retries={"total_max_attempts": BLOBSTORAGE_MAX_ATTEMPTS},
+            ),
+            verify=False,
+            bucket_name=f"test-{storagename}",
+        ),
+    }
@@ -1,7 +1,9 @@
 # Copyright The IETF Trust 2007-2019, All Rights Reserved
 # -*- coding: utf-8 -*-
 
 from ietf.settings import *  # pyflakes:ignore
+from ietf.settings import STORAGES, MORE_STORAGE_NAMES, BLOBSTORAGE_CONNECT_TIMEOUT, BLOBSTORAGE_READ_TIMEOUT, BLOBSTORAGE_MAX_ATTEMPTS
+import botocore.config
 
 ALLOWED_HOSTS = ['*']
 
@@ -66,3 +68,22 @@ NOMCOM_PUBLIC_KEYS_DIR = 'data/nomcom_keys/public_keys/'
 SLIDE_STAGING_PATH = 'test/staging/'
 
 DE_GFM_BINARY = '/usr/local/bin/de-gfm'
+
+for storagename in MORE_STORAGE_NAMES:
+    STORAGES[storagename] = {
+        "BACKEND": "ietf.doc.storage_backends.CustomS3Storage",
+        "OPTIONS": dict(
+            endpoint_url="http://blobstore:9000",
+            access_key="minio_root",
+            secret_key="minio_pass",
+            security_token=None,
+            client_config=botocore.config.Config(
+                signature_version="s3v4",
+                connect_timeout=BLOBSTORAGE_CONNECT_TIMEOUT,
+                read_timeout=BLOBSTORAGE_READ_TIMEOUT,
+                retries={"total_max_attempts": BLOBSTORAGE_MAX_ATTEMPTS},
+            ),
+            verify=False,
+            bucket_name=f"test-{storagename}",
+        ),
+    }
@@ -28,5 +28,8 @@ services:
     volumes:
       - postgresdb-data:/var/lib/postgresql/data
 
+  blobstore:
+    image: ghcr.io/ietf-tools/datatracker-devblobstore:latest
+
 volumes:
   postgresdb-data:
@@ -1,7 +1,9 @@
 # Copyright The IETF Trust 2007-2019, All Rights Reserved
 # -*- coding: utf-8 -*-
 
 from ietf.settings import *  # pyflakes:ignore
+from ietf.settings import STORAGES, MORE_STORAGE_NAMES, BLOBSTORAGE_CONNECT_TIMEOUT, BLOBSTORAGE_READ_TIMEOUT, BLOBSTORAGE_MAX_ATTEMPTS
+import botocore.config
 
 ALLOWED_HOSTS = ['*']
 
@@ -65,3 +67,22 @@ NOMCOM_PUBLIC_KEYS_DIR = 'data/nomcom_keys/public_keys/'
 SLIDE_STAGING_PATH = 'test/staging/'
 
 DE_GFM_BINARY = '/usr/local/bin/de-gfm'
+
+for storagename in MORE_STORAGE_NAMES:
+    STORAGES[storagename] = {
+        "BACKEND": "ietf.doc.storage_backends.CustomS3Storage",
+        "OPTIONS": dict(
+            endpoint_url="http://blobstore:9000",
+            access_key="minio_root",
+            secret_key="minio_pass",
+            security_token=None,
+            client_config=botocore.config.Config(
+                signature_version="s3v4",
+                connect_timeout=BLOBSTORAGE_CONNECT_TIMEOUT,
+                read_timeout=BLOBSTORAGE_READ_TIMEOUT,
+                retries={"total_max_attempts": BLOBSTORAGE_MAX_ATTEMPTS},
+            ),
+            verify=False,
+            bucket_name=f"test-{storagename}",
+        ),
+    }
@@ -15,6 +15,7 @@ services:
     depends_on:
       - db
       - mq
+      - blobstore
 
     ipc: host
 
@@ -83,6 +84,14 @@ services:
       - .:/workspace
       - app-assets:/assets
 
+  blobstore:
+    image: ghcr.io/ietf-tools/datatracker-devblobstore:latest
+    restart: unless-stopped
+    volumes:
+      - "minio-data:/data"
+
+
   # Celery Beat is a periodic task runner. It is not normally needed for development,
   # but can be enabled by uncommenting the following.
   #
 
@@ -106,3 +115,4 @@ services:
 volumes:
   postgresdb-data:
   app-assets:
+  minio-data:
@@ -43,8 +43,8 @@ RUN rm -rf /tmp/library-scripts
 # Copy the startup file
 COPY docker/scripts/app-init.sh /docker-init.sh
 COPY docker/scripts/app-start.sh /docker-start.sh
-RUN sed -i 's/\r$//' /docker-init.sh && chmod +x /docker-init.sh
-RUN sed -i 's/\r$//' /docker-start.sh && chmod +x /docker-start.sh
+RUN sed -i 's/\r$//' /docker-init.sh && chmod +rx /docker-init.sh
+RUN sed -i 's/\r$//' /docker-start.sh && chmod +rx /docker-start.sh
 
 # Fix user UID / GID to match host
 RUN groupmod --gid $USER_GID $USERNAME \
@@ -1,11 +1,13 @@
-# Copyright The IETF Trust 2007-2019, All Rights Reserved
+# Copyright The IETF Trust 2007-2025, All Rights Reserved
 # -*- coding: utf-8 -*-
 
 from ietf.settings import *  # pyflakes:ignore
+from ietf.settings import STORAGES, MORE_STORAGE_NAMES, BLOBSTORAGE_CONNECT_TIMEOUT, BLOBSTORAGE_READ_TIMEOUT, BLOBSTORAGE_MAX_ATTEMPTS
+import botocore.config
 
 ALLOWED_HOSTS = ['*']
 
 from ietf.settings_postgresqldb import DATABASES  # pyflakes:ignore
 
 IDSUBMIT_IDNITS_BINARY = "/usr/local/bin/idnits"
 IDSUBMIT_STAGING_PATH = "/assets/www6s/staging/"
 
@@ -37,6 +39,25 @@ INTERNAL_IPS = [".".join(ip.split(".")[:-1] + ["1"]) for ip in ips] + ['127.0.0.
 # DEV_TEMPLATE_CONTEXT_PROCESSORS = [
 #     'ietf.context_processors.sql_debug',
 # ]
+for storagename in MORE_STORAGE_NAMES:
+    STORAGES[storagename] = {
+        "BACKEND": "ietf.doc.storage_backends.CustomS3Storage",
+        "OPTIONS": dict(
+            endpoint_url="http://blobstore:9000",
+            access_key="minio_root",
+            secret_key="minio_pass",
+            security_token=None,
+            client_config=botocore.config.Config(
+                signature_version="s3v4",
+                connect_timeout=BLOBSTORAGE_CONNECT_TIMEOUT,
+                read_timeout=BLOBSTORAGE_READ_TIMEOUT,
+                retries={"total_max_attempts": BLOBSTORAGE_MAX_ATTEMPTS},
+            ),
+            verify=False,
+            bucket_name=storagename,
+        ),
+    }
+
 
 DOCUMENT_PATH_PATTERN = '/assets/ietfdata/doc/{doc.type_id}/'
 INTERNET_DRAFT_PATH = '/assets/ietf-ftp/internet-drafts/'
@@ -16,6 +16,10 @@ services:
   pgadmin:
     ports:
       - '5433'
+  blobstore:
+    ports:
+      - '9000'
+      - '9001'
   celery:
     volumes:
       - .:/workspace
docker/scripts/app-configure-blobstore.py (new executable file)
@@ -0,0 +1,28 @@
#!/usr/bin/env python
# Copyright The IETF Trust 2024, All Rights Reserved

import boto3
import botocore.config  # needed for botocore.config.Config below
import os
import sys

from ietf.settings import MORE_STORAGE_NAMES


def init_blobstore():
    blobstore = boto3.resource(
        "s3",
        endpoint_url=os.environ.get("BLOB_STORE_ENDPOINT_URL", "http://blobstore:9000"),
        aws_access_key_id=os.environ.get("BLOB_STORE_ACCESS_KEY", "minio_root"),
        aws_secret_access_key=os.environ.get("BLOB_STORE_SECRET_KEY", "minio_pass"),
        aws_session_token=None,
        config=botocore.config.Config(signature_version="s3v4"),
        verify=False,
    )
    for bucketname in MORE_STORAGE_NAMES:
        blobstore.create_bucket(
            Bucket=f"{os.environ.get('BLOB_STORE_BUCKET_PREFIX', '')}{bucketname}".strip()
        )


if __name__ == "__main__":
    sys.exit(init_blobstore())
@@ -73,6 +73,11 @@ echo "Creating data directories..."
 chmod +x ./docker/scripts/app-create-dirs.sh
 ./docker/scripts/app-create-dirs.sh
 
+# Configure the development blobstore
+echo "Configuring blobstore..."
+PYTHONPATH=/workspace python ./docker/scripts/app-configure-blobstore.py
+
 # Download latest coverage results file
 echo "Downloading latest coverage results file..."
@@ -25,6 +25,7 @@ from tastypie.test import ResourceTestCaseMixin
 import debug  # pyflakes:ignore
 
 import ietf
+from ietf.doc.storage_utils import retrieve_str
 from ietf.doc.utils import get_unicode_document_content
 from ietf.doc.models import RelatedDocument, State
 from ietf.doc.factories import IndividualDraftFactory, WgDraftFactory, WgRfcFactory
 
@@ -553,6 +554,10 @@ class CustomApiTests(TestCase):
         newdoc = session.presentations.get(document__type_id=type_id).document
         newdoccontent = get_unicode_document_content(newdoc.name, Path(session.meeting.get_materials_path()) / type_id / newdoc.uploaded_filename)
         self.assertEqual(json.loads(content), json.loads(newdoccontent))
+        self.assertEqual(
+            json.loads(retrieve_str(type_id, newdoc.uploaded_filename)),
+            json.loads(content)
+        )
 
     def test_api_upload_bluesheet(self):
         url = urlreverse("ietf.meeting.views.api_upload_bluesheet")
@@ -12,7 +12,7 @@ from .models import (StateType, State, RelatedDocument, DocumentAuthor, Document
     TelechatDocEvent, BallotPositionDocEvent, ReviewRequestDocEvent, InitialReviewDocEvent,
     AddedMessageEvent, SubmissionDocEvent, DeletedEvent, EditedAuthorsDocEvent, DocumentURL,
     ReviewAssignmentDocEvent, IanaExpertDocEvent, IRSGBallotDocEvent, DocExtResource, DocumentActionHolder,
-    BofreqEditorDocEvent, BofreqResponsibleDocEvent )
+    BofreqEditorDocEvent, BofreqResponsibleDocEvent, StoredObject )
 
 from ietf.utils.validators import validate_external_resource_value
 
@@ -218,3 +218,9 @@ class DocExtResourceAdmin(admin.ModelAdmin):
     search_fields = ['doc__name', 'value', 'display_name', 'name__slug',]
     raw_id_fields = ['doc', ]
 admin.site.register(DocExtResource, DocExtResourceAdmin)
+
+class StoredObjectAdmin(admin.ModelAdmin):
+    list_display = ['store', 'name', 'modified', 'deleted']
+    list_filter = ['deleted']
+    search_fields = ['store', 'name', 'doc_name', 'doc_rev', 'deleted']
+admin.site.register(StoredObject, StoredObjectAdmin)
@@ -13,6 +13,7 @@ from pathlib import Path
 
 from typing import List, Optional  # pyflakes:ignore
 
+from ietf.doc.storage_utils import exists_in_storage, remove_from_storage
 from ietf.doc.utils import update_action_holders
 from ietf.utils import log
 from ietf.utils.mail import send_mail
 
@@ -156,11 +157,17 @@ def move_draft_files_to_archive(doc, rev):
         if mark.exists():
             mark.unlink()
 
+    def remove_from_active_draft_storage(file):
+        # Assumes the glob will never find a file with no suffix
+        ext = file.suffix[1:]
+        remove_from_storage("active-draft", f"{ext}/{file.name}", warn_if_missing=False)
+
+    # Note that the object is already in the "draft" storage.
     src_dir = Path(settings.INTERNET_DRAFT_PATH)
     for file in src_dir.glob("%s-%s.*" % (doc.name, rev)):
         move_file(str(file.name))
         remove_ftp_copy(str(file.name))
+        remove_from_active_draft_storage(file)
 
 def expire_draft(doc):
     # clean up files
 
@@ -218,6 +225,13 @@ def clean_up_draft_files():
                 mark = Path(settings.FTP_DIR) / "internet-drafts" / basename
                 if mark.exists():
                     mark.unlink()
+                if ext:
+                    # Note that we're not moving these strays anywhere - the assumption
+                    # is that the active-draft blobstore will not get strays.
+                    # See, however, the note about "major system failures" at "unknown_ids"
+                    blobname = f"{ext[1:]}/{basename}"
+                    if exists_in_storage("active-draft", blobname):
+                        remove_from_storage("active-draft", blobname)
 
             try:
                 doc = Document.objects.get(name=filename, rev=revision)
@@ -0,0 +1,66 @@
# Copyright The IETF Trust 2025, All Rights Reserved

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ("doc", "0024_remove_ad_is_watching_states"),
    ]

    operations = [
        migrations.CreateModel(
            name="StoredObject",
            fields=[
                (
                    "id",
                    models.AutoField(
                        auto_created=True,
                        primary_key=True,
                        serialize=False,
                        verbose_name="ID",
                    ),
                ),
                ("store", models.CharField(max_length=256)),
                ("name", models.CharField(max_length=1024)),
                ("sha384", models.CharField(max_length=96)),
                ("len", models.PositiveBigIntegerField()),
                (
                    "store_created",
                    models.DateTimeField(
                        help_text="The instant the object was first placed in the store"
                    ),
                ),
                (
                    "created",
                    models.DateTimeField(
                        help_text="Instant object became known. May not be the same as the storage's created value for the instance. It will hold ctime for objects imported from older disk storage"
                    ),
                ),
                (
                    "modified",
                    models.DateTimeField(
                        help_text="Last instant object was modified. May not be the same as the storage's modified value for the instance. It will hold mtime for objects imported from older disk storage unless they've actually been overwritten more recently"
                    ),
                ),
                ("doc_name", models.CharField(blank=True, max_length=255, null=True)),
                ("doc_rev", models.CharField(blank=True, max_length=16, null=True)),
                ("deleted", models.DateTimeField(null=True)),
            ],
            options={
                "indexes": [
                    models.Index(
                        fields=["doc_name", "doc_rev"],
                        name="doc_storedo_doc_nam_d04465_idx",
                    )
                ],
            },
        ),
        migrations.AddConstraint(
            model_name="storedobject",
            constraint=models.UniqueConstraint(
                fields=("store", "name"), name="unique_name_per_store"
            ),
        ),
    ]
@@ -9,14 +9,16 @@ import os
 import django.db
 import rfc2html
 
+from io import BufferedReader
 from pathlib import Path
 from lxml import etree
-from typing import Optional, TYPE_CHECKING
+from typing import Optional, Protocol, TYPE_CHECKING, Union
 from weasyprint import HTML as wpHTML
 from weasyprint.text.fonts import FontConfiguration
 
 from django.db import models
 from django.core import checks
+from django.core.files.base import File
 from django.core.cache import caches
 from django.core.validators import URLValidator, RegexValidator
 from django.urls import reverse as urlreverse
 
@@ -30,6 +32,11 @@ from django.contrib.staticfiles import finders
 import debug  # pyflakes:ignore
 
 from ietf.group.models import Group
+from ietf.doc.storage_utils import (
+    store_str as utils_store_str,
+    store_bytes as utils_store_bytes,
+    store_file as utils_store_file
+)
 from ietf.name.models import ( DocTypeName, DocTagName, StreamName, IntendedStdLevelName, StdLevelName,
     DocRelationshipName, DocReminderTypeName, BallotPositionName, ReviewRequestStateName, ReviewAssignmentStateName, FormalLanguageName,
     DocUrlTagName, ExtResourceName)
 
@@ -714,10 +721,52 @@ class DocumentInfo(models.Model):
         if self.type_id == "rfc" and self.came_from_draft():
             refs_to |= self.came_from_draft().referenced_by_rfcs()
         return refs_to
 
     class Meta:
         abstract = True
 
+
+class HasNameRevAndTypeIdProtocol(Protocol):
+    """Typing Protocol describing a class that has name, rev, and type_id properties"""
+    @property
+    def name(self) -> str: ...
+    @property
+    def rev(self) -> str: ...
+    @property
+    def type_id(self) -> str: ...
+
+
+class StorableMixin:
+    """Mixin that adds storage helpers to a DocumentInfo subclass"""
+    def store_str(
+        self: HasNameRevAndTypeIdProtocol,
+        name: str,
+        content: str,
+        allow_overwrite: bool = False
+    ) -> None:
+        return utils_store_str(self.type_id, name, content, allow_overwrite, self.name, self.rev)
+
+    def store_bytes(
+        self: HasNameRevAndTypeIdProtocol,
+        name: str,
+        content: bytes,
+        allow_overwrite: bool = False,
+        doc_name: Optional[str] = None,
+        doc_rev: Optional[str] = None
+    ) -> None:
+        return utils_store_bytes(self.type_id, name, content, allow_overwrite, self.name, self.rev)
+
+    def store_file(
+        self: HasNameRevAndTypeIdProtocol,
+        name: str,
+        file: Union[File, BufferedReader],
+        allow_overwrite: bool = False,
+        doc_name: Optional[str] = None,
+        doc_rev: Optional[str] = None
+    ) -> None:
+        return utils_store_file(self.type_id, name, file, allow_overwrite, self.name, self.rev)
+
+
 STATUSCHANGE_RELATIONS = ('tops','tois','tohist','toinf','tobcp','toexp')
 
 class RelatedDocument(models.Model):
 
@@ -870,7 +919,7 @@ validate_docname = RegexValidator(
     'invalid'
 )
 
-class Document(DocumentInfo):
+class Document(StorableMixin, DocumentInfo):
     name = models.CharField(max_length=255, validators=[validate_docname,], unique=True)  # immutable
 
     action_holders = models.ManyToManyField(Person, through=DocumentActionHolder, blank=True)
 
@@ -1192,7 +1241,7 @@ class DocHistoryAuthor(DocumentAuthorInfo):
     def __str__(self):
         return u"%s %s (%s)" % (self.document.doc.name, self.person, self.order)
 
-class DocHistory(DocumentInfo):
+class DocHistory(StorableMixin, DocumentInfo):
     doc = ForeignKey(Document, related_name="history_set")
 
     name = models.CharField(max_length=255)
 
@@ -1538,3 +1587,31 @@ class BofreqEditorDocEvent(DocEvent):
 class BofreqResponsibleDocEvent(DocEvent):
     """ Capture the responsible leadership (IAB and IESG members) for a BOF Request """
     responsible = models.ManyToManyField('person.Person', blank=True)
+
+class StoredObject(models.Model):
+    """Hold metadata about objects placed in object storage"""
+
+    store = models.CharField(max_length=256)
+    name = models.CharField(max_length=1024, null=False, blank=False)  # N.B. the 1024 limit on name comes from S3
+    sha384 = models.CharField(max_length=96)
+    len = models.PositiveBigIntegerField()
+    store_created = models.DateTimeField(help_text="The instant the object was first placed in the store")
+    created = models.DateTimeField(
+        null=False,
+        help_text="Instant object became known. May not be the same as the storage's created value for the instance. It will hold ctime for objects imported from older disk storage"
+    )
+    modified = models.DateTimeField(
+        null=False,
+        help_text="Last instant object was modified. May not be the same as the storage's modified value for the instance. It will hold mtime for objects imported from older disk storage unless they've actually been overwritten more recently"
+    )
+    doc_name = models.CharField(max_length=255, null=True, blank=True)
+    doc_rev = models.CharField(max_length=16, null=True, blank=True)
+    deleted = models.DateTimeField(null=True)
+
+    class Meta:
+        constraints = [
+            models.UniqueConstraint(fields=['store', 'name'], name='unique_name_per_store'),
+        ]
+        indexes = [
+            models.Index(fields=["doc_name", "doc_rev"]),
+        ]
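Because `StoredObject` rows carry `doc_name` / `doc_rev` and a soft-delete timestamp, the live blobs behind a given document can be found with an ordinary queryset. A sketch (the draft name and revision are made up):

```python
from ietf.doc.models import StoredObject

# Live blobs for a hypothetical draft; deleted stays null until removal
for blob in StoredObject.objects.filter(
    doc_name="draft-example-blobs", doc_rev="03", deleted__isnull=True
):
    print(blob.store, blob.name, blob.len, blob.sha384)
```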
@@ -18,7 +18,7 @@ from ietf.doc.models import (BallotType, DeletedEvent, StateType, State, Documen
     RelatedDocHistory, BallotPositionDocEvent, AddedMessageEvent, SubmissionDocEvent,
     ReviewRequestDocEvent, ReviewAssignmentDocEvent, EditedAuthorsDocEvent, DocumentURL,
     IanaExpertDocEvent, IRSGBallotDocEvent, DocExtResource, DocumentActionHolder,
-    BofreqEditorDocEvent,BofreqResponsibleDocEvent)
+    BofreqEditorDocEvent, BofreqResponsibleDocEvent, StoredObject)
 
 from ietf.name.resources import BallotPositionNameResource, DocTypeNameResource
 class BallotTypeResource(ModelResource):
 
@@ -842,3 +842,26 @@ class BofreqResponsibleDocEventResource(ModelResource):
         "responsible": ALL_WITH_RELATIONS,
         }
 api.doc.register(BofreqResponsibleDocEventResource())
+
+
+class StoredObjectResource(ModelResource):
+    class Meta:
+        queryset = StoredObject.objects.all()
+        serializer = api.Serializer()
+        cache = SimpleCache()
+        #resource_name = 'storedobject'
+        ordering = ['id', ]
+        filtering = {
+            "id": ALL,
+            "store": ALL,
+            "name": ALL,
+            "sha384": ALL,
+            "len": ALL,
+            "store_created": ALL,
+            "created": ALL,
+            "modified": ALL,
+            "doc_name": ALL,
+            "doc_rev": ALL,
+            "deleted": ALL,
+        }
+api.doc.register(StoredObjectResource())
ietf/doc/storage_backends.py (new file)
@@ -0,0 +1,192 @@
# Copyright The IETF Trust 2025, All Rights Reserved

import debug  # pyflakes:ignore
import json

from contextlib import contextmanager
from hashlib import sha384
from io import BufferedReader
from storages.backends.s3 import S3Storage
from typing import Optional, Union

from django.core.files.base import File

from ietf.doc.models import StoredObject
from ietf.utils.log import log
from ietf.utils.timezone import timezone


@contextmanager
def maybe_log_timing(enabled, op, **kwargs):
    """If enabled, log elapsed time and additional data from kwargs

    Emits log even if an exception occurs
    """
    before = timezone.now()
    exception = None
    try:
        yield
    except Exception as err:
        exception = err
        raise
    finally:
        if enabled:
            dt = timezone.now() - before
            log(
                json.dumps(
                    {
                        "log": "S3Storage_timing",
                        "seconds": dt.total_seconds(),
                        "op": op,
                        "exception": "" if exception is None else repr(exception),
                        **kwargs,
                    }
                )
            )


# TODO-BLOBSTORE
# Consider overriding save directly so that
# we capture metadata for, e.g., ImageField objects
class CustomS3Storage(S3Storage):

    def __init__(self, **settings):
        self.in_flight_custom_metadata = {}  # type is Dict[str, Dict[str, str]]
        super().__init__(**settings)

    def get_default_settings(self):
        # add a default for the ietf_log_blob_timing boolean
        return super().get_default_settings() | {"ietf_log_blob_timing": False}

    def _save(self, name, content):
        with maybe_log_timing(
            self.ietf_log_blob_timing, "_save", bucket_name=self.bucket_name, name=name
        ):
            return super()._save(name, content)

    def _open(self, name, mode="rb"):
        with maybe_log_timing(
            self.ietf_log_blob_timing,
            "_open",
            bucket_name=self.bucket_name,
            name=name,
            mode=mode,
        ):
            return super()._open(name, mode)

    def delete(self, name):
        with maybe_log_timing(
            self.ietf_log_blob_timing, "delete", bucket_name=self.bucket_name, name=name
        ):
            super().delete(name)

    def store_file(
        self,
        kind: str,
        name: str,
        file: Union[File, BufferedReader],
        allow_overwrite: bool = False,
        doc_name: Optional[str] = None,
        doc_rev: Optional[str] = None,
    ):
        is_new = not self.exists_in_storage(kind, name)
        # debug.show('f"Asked to store {name} in {kind}: is_new={is_new}, allow_overwrite={allow_overwrite}"')
        if not allow_overwrite and not is_new:
            log(f"Failed to save {kind}:{name} - name already exists in store")
            debug.show('f"Failed to save {kind}:{name} - name already exists in store"')
            # raise Exception("Not ignoring overwrite attempts while testing")
        else:
            try:
                new_name = self.save(name, file)
                now = timezone.now()
                record, created = StoredObject.objects.get_or_create(
                    store=kind,
                    name=name,
                    defaults=dict(
                        sha384=self.in_flight_custom_metadata[name]["sha384"],
                        len=int(self.in_flight_custom_metadata[name]["len"]),
                        store_created=now,
                        created=now,
                        modified=now,
                        doc_name=doc_name,  # Note that these are assumed to be invariant
                        doc_rev=doc_rev,  # for a given name
                    ),
                )
                if not created:
                    record.sha384 = self.in_flight_custom_metadata[name]["sha384"]
                    record.len = int(self.in_flight_custom_metadata[name]["len"])
                    record.modified = now
                    record.deleted = None
                    record.save()
                if new_name != name:
                    complaint = f"Error encountered saving '{name}' - results stored in '{new_name}' instead."
                    log(complaint)
                    debug.show("complaint")
                    # Note that we are otherwise ignoring this condition - it should become an error later.
            except Exception as e:
                # Log and then swallow the exception while we're learning.
                # Don't let failure pass so quietly when these are the authoritative bits.
                complaint = f"Failed to save {kind}:{name}"
                log(complaint, e)
                debug.show('f"{complaint}: {e}"')
            finally:
                del self.in_flight_custom_metadata[name]
        return None

    def exists_in_storage(self, kind: str, name: str) -> bool:
        try:
            # open is realized with a HEAD
            # See https://github.com/jschneier/django-storages/blob/b79ea310201e7afd659fe47e2882fe59aae5b517/storages/backends/s3.py#L528
            with self.open(name):
                return True
        except FileNotFoundError:
            return False

    def remove_from_storage(
        self, kind: str, name: str, warn_if_missing: bool = True
    ) -> None:
        now = timezone.now()
        try:
            with self.open(name):
                pass
            self.delete(name)
            # debug.show('f"deleted {name} from {kind} storage"')
        except FileNotFoundError:
            if warn_if_missing:
                complaint = (
                    f"WARNING: Asked to delete non-existent {name} from {kind} storage"
                )
                log(complaint)
                debug.show("complaint")
        existing_record = StoredObject.objects.filter(store=kind, name=name)
        if not existing_record.exists() and warn_if_missing:
            complaint = f"WARNING: Asked to delete {name} from {kind} storage, but there was no matching StoredObject"
            log(complaint)
            debug.show("complaint")
        else:
            # Note that existing_record is a queryset that will have one matching object
            existing_record.filter(deleted__isnull=True).update(deleted=now)

    def _get_write_parameters(self, name, content=None):
        # debug.show('f"getting write parameters for {name}"')
        params = super()._get_write_parameters(name, content)
        if "Metadata" not in params:
            params["Metadata"] = {}
        try:
            content.seek(0)
        except AttributeError:  # TODO-BLOBSTORE
            debug.say("Encountered Non-Seekable content")
            raise NotImplementedError("cannot handle unseekable content")
        content_bytes = content.read()
        if not isinstance(
            content_bytes, bytes
        ):  # TODO-BLOBSTORE: This is sketch-development only - remove before committing
            raise Exception(f"Expected bytes - got {type(content_bytes)}")
        content.seek(0)
        metadata = {
            "len": f"{len(content_bytes)}",
            "sha384": f"{sha384(content_bytes).hexdigest()}",
        }
        params["Metadata"].update(metadata)
        self.in_flight_custom_metadata[name] = metadata
        return params
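Since `get_default_settings()` merges in an `ietf_log_blob_timing` default, the JSON timing logs can be enabled per store from the `STORAGES` configuration. A sketch of one store's entry with timing turned on (other `OPTIONS` as in the settings hunks above; the store name is illustrative):

```python
STORAGES["draft"] = {
    "BACKEND": "ietf.doc.storage_backends.CustomS3Storage",
    "OPTIONS": dict(
        endpoint_url="http://blobstore:9000",
        access_key="minio_root",
        secret_key="minio_pass",
        bucket_name="draft",
        ietf_log_blob_timing=True,  # log _save/_open/delete timings as JSON
    ),
}
```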
ietf/doc/storage_utils.py (new file)
@@ -0,0 +1,103 @@
# Copyright The IETF Trust 2025, All Rights Reserved

from io import BufferedReader
from typing import Optional, Union
import debug  # pyflakes:ignore

from django.conf import settings
from django.core.files.base import ContentFile, File
from django.core.files.storage import storages


# TODO-BLOBSTORE (Future, maybe after leaving 3.9): add a return type
def _get_storage(kind: str):

    if kind in settings.MORE_STORAGE_NAMES:
        # TODO-BLOBSTORE - add a checker that verifies configuration will only return CustomS3Storages
        return storages[kind]
    else:
        debug.say(f"Got into not-implemented looking for {kind}")
        raise NotImplementedError(f"Don't know how to store {kind}")


def exists_in_storage(kind: str, name: str) -> bool:
    if settings.ENABLE_BLOBSTORAGE:
        store = _get_storage(kind)
        return store.exists_in_storage(kind, name)
    else:
        return False


def remove_from_storage(kind: str, name: str, warn_if_missing: bool = True) -> None:
    if settings.ENABLE_BLOBSTORAGE:
        store = _get_storage(kind)
        store.remove_from_storage(kind, name, warn_if_missing)
    return None


# TODO-BLOBSTORE: Try to refactor `kind` out of the signature of the methods already on the custom store (which knows its kind)
def store_file(
    kind: str,
    name: str,
    file: Union[File, BufferedReader],
    allow_overwrite: bool = False,
    doc_name: Optional[str] = None,
    doc_rev: Optional[str] = None,
) -> None:
    # debug.show('f"asked to store {name} into {kind}"')
    if settings.ENABLE_BLOBSTORAGE:
        store = _get_storage(kind)
        store.store_file(kind, name, file, allow_overwrite, doc_name, doc_rev)
    return None


def store_bytes(
    kind: str,
    name: str,
    content: bytes,
    allow_overwrite: bool = False,
    doc_name: Optional[str] = None,
    doc_rev: Optional[str] = None,
) -> None:
    if settings.ENABLE_BLOBSTORAGE:
        store_file(kind, name, ContentFile(content), allow_overwrite)
    return None


def store_str(
    kind: str,
    name: str,
    content: str,
    allow_overwrite: bool = False,
    doc_name: Optional[str] = None,
    doc_rev: Optional[str] = None,
) -> None:
    if settings.ENABLE_BLOBSTORAGE:
        content_bytes = content.encode("utf-8")
        store_bytes(kind, name, content_bytes, allow_overwrite)
    return None


def retrieve_bytes(kind: str, name: str) -> bytes:
    from ietf.doc.storage_backends import maybe_log_timing
    content = b""
    if settings.ENABLE_BLOBSTORAGE:
        store = _get_storage(kind)
        with store.open(name) as f:
            with maybe_log_timing(
                hasattr(store, "ietf_log_blob_timing") and store.ietf_log_blob_timing,
                "read",
                bucket_name=store.bucket_name if hasattr(store, "bucket_name") else "",
                name=name,
            ):
                content = f.read()
    return content


def retrieve_str(kind: str, name: str) -> str:
    content = ""
    if settings.ENABLE_BLOBSTORAGE:
        content_bytes = retrieve_bytes(kind, name)
        # TODO-BLOBSTORE: try to decode all the different ways doc.text() does
        content = content_bytes.decode("utf-8")
    return content
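A quick round trip through this module, as the tests below exercise it - a sketch assuming the dev settings above, where "bofreq" is one of MORE_STORAGE_NAMES, ENABLE_BLOBSTORAGE is on, and the blob name is made up:

```python
from ietf.doc.storage_utils import (
    exists_in_storage, remove_from_storage, retrieve_str, store_str
)

store_str("bofreq", "bofreq-example-00.md", "# Example", allow_overwrite=True)
assert exists_in_storage("bofreq", "bofreq-example-00.md")
assert retrieve_str("bofreq", "bofreq-example-00.md") == "# Example"
remove_from_storage("bofreq", "bofreq-example-00.md")
```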
@@ -84,7 +84,7 @@ def generate_idnits2_rfc_status_task():
     outpath = Path(settings.DERIVED_DIR) / "idnits2-rfc-status"
     blob = generate_idnits2_rfc_status()
     try:
-        outpath.write_text(blob, encoding="utf8")
+        outpath.write_text(blob, encoding="utf8")  # TODO-BLOBSTORE
     except Exception as e:
         log.log(f"failed to write idnits2-rfc-status: {e}")
 
@@ -94,7 +94,7 @@ def generate_idnits2_rfcs_obsoleted_task():
     outpath = Path(settings.DERIVED_DIR) / "idnits2-rfcs-obsoleted"
     blob = generate_idnits2_rfcs_obsoleted()
     try:
-        outpath.write_text(blob, encoding="utf8")
+        outpath.write_text(blob, encoding="utf8")  # TODO-BLOBSTORE
     except Exception as e:
         log.log(f"failed to write idnits2-rfcs-obsoleted: {e}")
@@ -16,6 +16,7 @@ from django.urls import reverse as urlreverse
 from django.template.loader import render_to_string
 from django.utils import timezone
 
+from ietf.doc.storage_utils import retrieve_str
 from ietf.group.factories import RoleFactory
 from ietf.doc.factories import BofreqFactory, NewRevisionDocEventFactory
 from ietf.doc.models import State, Document, NewRevisionDocEvent
 
@@ -340,6 +341,7 @@ This test section has some text.
         doc = reload_db_objects(doc)
         self.assertEqual('%02d'%(int(rev)+1) ,doc.rev)
         self.assertEqual(f'# {username}', doc.text())
+        self.assertEqual(f'# {username}', retrieve_str('bofreq', doc.get_base_name()))
         self.assertEqual(docevent_count+1, doc.docevent_set.count())
         self.assertEqual(1, len(outbox))
         rev = doc.rev
 
@@ -379,6 +381,7 @@ This test section has some text.
             self.assertEqual(list(bofreq_editors(bofreq)), [nobody])
             self.assertEqual(bofreq.latest_event(NewRevisionDocEvent).rev, '00')
             self.assertEqual(bofreq.text_or_error(), 'some stuff')
+            self.assertEqual(retrieve_str('bofreq', bofreq.get_base_name()), 'some stuff')
             self.assertEqual(len(outbox),1)
         finally:
             os.unlink(file.name)
@@ -16,6 +16,7 @@ import debug  # pyflakes:ignore
 from ietf.doc.factories import CharterFactory, NewRevisionDocEventFactory, TelechatDocEventFactory
 from ietf.doc.models import ( Document, State, BallotDocEvent, BallotType, NewRevisionDocEvent,
     TelechatDocEvent, WriteupDocEvent )
+from ietf.doc.storage_utils import retrieve_str
 from ietf.doc.utils_charter import ( next_revision, default_review_text, default_action_text,
     charter_name_for_group )
 from ietf.doc.utils import close_open_ballots
 
@@ -519,6 +520,11 @@ class EditCharterTests(TestCase):
         ftp_charter_path = Path(settings.FTP_DIR) / "charter" / charter_path.name
         self.assertTrue(ftp_charter_path.exists())
         self.assertTrue(charter_path.samefile(ftp_charter_path))
+        blobstore_contents = retrieve_str("charter", charter.get_base_name())
+        self.assertEqual(
+            blobstore_contents,
+            "Windows line\nMac line\nUnix line\n" + utf_8_snippet.decode("utf-8"),
+        )
 
     def test_submit_initial_charter(self):
@@ -16,6 +16,7 @@ import debug  # pyflakes:ignore
 
 from ietf.doc.factories import IndividualDraftFactory, ConflictReviewFactory, RgDraftFactory
 from ietf.doc.models import Document, DocEvent, NewRevisionDocEvent, BallotPositionDocEvent, TelechatDocEvent, State, DocTagName
+from ietf.doc.storage_utils import retrieve_str
 from ietf.doc.utils import create_ballot_if_not_open
 from ietf.doc.views_conflict_review import default_approval_text
 from ietf.group.models import Person
 
@@ -422,6 +423,7 @@ class ConflictReviewSubmitTests(TestCase):
             f.close()
         self.assertTrue(ftp_path.exists())
         self.assertTrue( "submission-00" in doc.latest_event(NewRevisionDocEvent).desc)
+        self.assertEqual(retrieve_str("conflrev", basename), "Some initial review text\n")
 
     def test_subsequent_submission(self):
         doc = Document.objects.get(name='conflict-review-imaginary-irtf-submission')
@@ -24,6 +24,7 @@ from ietf.doc.factories import EditorialDraftFactory, IndividualDraftFactory, Wg
 from ietf.doc.models import ( Document, DocReminder, DocEvent,
     ConsensusDocEvent, LastCallDocEvent, RelatedDocument, State, TelechatDocEvent,
     WriteupDocEvent, DocRelationshipName, IanaExpertDocEvent )
+from ietf.doc.storage_utils import exists_in_storage, store_str
 from ietf.doc.utils import get_tags_for_stream_id, create_ballot_if_not_open
 from ietf.doc.views_draft import AdoptDraftForm
 from ietf.name.models import DocTagName, RoleName
 
@@ -577,6 +578,11 @@ class DraftFileMixin():
     def write_draft_file(self, name, size):
         with (Path(settings.INTERNET_DRAFT_PATH) / name).open('w') as f:
             f.write("a" * size)
+        _, ext = os.path.splitext(name)
+        if ext:
+            ext = ext[1:]
+            store_str("active-draft", f"{ext}/{name}", "a"*size, allow_overwrite=True)
+            store_str("draft", f"{ext}/{name}", "a"*size, allow_overwrite=True)
 
 
 class ResurrectTests(DraftFileMixin, TestCase):
 
@@ -649,6 +655,7 @@ class ResurrectTests(DraftFileMixin, TestCase):
         # ensure file restored from archive directory
         self.assertTrue(os.path.exists(os.path.join(settings.INTERNET_DRAFT_PATH, txt)))
         self.assertTrue(not os.path.exists(os.path.join(settings.INTERNET_DRAFT_ARCHIVE_DIR, txt)))
+        self.assertTrue(exists_in_storage("active-draft", f"txt/{txt}"))
 
 
 class ExpireIDsTests(DraftFileMixin, TestCase):
 
@@ -775,6 +782,7 @@ class ExpireIDsTests(DraftFileMixin, TestCase):
         self.assertEqual(draft.action_holders.count(), 0)
         self.assertIn('Removed all action holders', draft.latest_event(type='changed_action_holders').desc)
         self.assertTrue(not os.path.exists(os.path.join(settings.INTERNET_DRAFT_PATH, txt)))
+        self.assertFalse(exists_in_storage("active-draft", f"txt/{txt}"))
         self.assertTrue(os.path.exists(os.path.join(settings.INTERNET_DRAFT_ARCHIVE_DIR, txt)))
 
         draft.delete()
 
@@ -798,6 +806,7 @@ class ExpireIDsTests(DraftFileMixin, TestCase):
         clean_up_draft_files()
 
         self.assertTrue(not os.path.exists(os.path.join(settings.INTERNET_DRAFT_PATH, unknown)))
+        self.assertFalse(exists_in_storage("active-draft", f"txt/{unknown}"))
         self.assertTrue(os.path.exists(os.path.join(settings.INTERNET_DRAFT_ARCHIVE_DIR, "unknown_ids", unknown)))
 
 
@@ -808,6 +817,7 @@ class ExpireIDsTests(DraftFileMixin, TestCase):
         clean_up_draft_files()
 
         self.assertTrue(not os.path.exists(os.path.join(settings.INTERNET_DRAFT_PATH, malformed)))
+        self.assertFalse(exists_in_storage("active-draft", f"txt/{malformed}"))
         self.assertTrue(os.path.exists(os.path.join(settings.INTERNET_DRAFT_ARCHIVE_DIR, "unknown_ids", malformed)))
 
 
@@ -822,9 +832,11 @@ class ExpireIDsTests(DraftFileMixin, TestCase):
         clean_up_draft_files()
 
         self.assertTrue(not os.path.exists(os.path.join(settings.INTERNET_DRAFT_PATH, txt)))
+        self.assertFalse(exists_in_storage("active-draft", f"txt/{txt}"))
         self.assertTrue(os.path.exists(os.path.join(settings.INTERNET_DRAFT_ARCHIVE_DIR, txt)))
 
         self.assertTrue(not os.path.exists(os.path.join(settings.INTERNET_DRAFT_PATH, pdf)))
+        self.assertFalse(exists_in_storage("active-draft", f"pdf/{pdf}"))
         self.assertTrue(os.path.exists(os.path.join(settings.INTERNET_DRAFT_ARCHIVE_DIR, pdf)))
 
         # expire draft
 
@@ -843,6 +855,7 @@ class ExpireIDsTests(DraftFileMixin, TestCase):
         clean_up_draft_files()
 
         self.assertTrue(not os.path.exists(os.path.join(settings.INTERNET_DRAFT_PATH, txt)))
+        self.assertFalse(exists_in_storage("active-draft", f"txt/{txt}"))
         self.assertTrue(os.path.exists(os.path.join(settings.INTERNET_DRAFT_ARCHIVE_DIR, txt)))
@ -18,6 +18,7 @@ from django.urls import reverse as urlreverse
|
|||
from django.utils import timezone
|
||||
|
||||
from ietf.doc.models import Document, State, NewRevisionDocEvent
|
||||
from ietf.doc.storage_utils import retrieve_str
|
||||
from ietf.group.factories import RoleFactory
|
||||
from ietf.group.models import Group
|
||||
from ietf.meeting.factories import MeetingFactory, SessionFactory, SessionPresentationFactory
|
||||
|
@ -123,6 +124,9 @@ class GroupMaterialTests(TestCase):
|
|||
ftp_filepath=Path(settings.FTP_DIR) / "slides" / basename
|
||||
with ftp_filepath.open() as f:
|
||||
self.assertEqual(f.read(), content)
|
||||
# This test is very sloppy wrt the actual file content.
|
||||
# Working with/around that for the moment.
|
||||
self.assertEqual(retrieve_str("slides", basename), content)
|
||||
|
||||
# check that posting same name is prevented
|
||||
test_file.seek(0)
|
||||
|
@ -237,4 +241,6 @@ class GroupMaterialTests(TestCase):
|
|||
|
||||
with io.open(os.path.join(doc.get_file_path(), doc.name + "-" + doc.rev + ".txt")) as f:
|
||||
self.assertEqual(f.read(), content)
|
||||
self.assertEqual(retrieve_str("slides", f"{doc.name}-{doc.rev}.txt"), content)
|
||||
|
||||
|
||||
|
|
|
@@ -20,6 +20,7 @@ from pyquery import PyQuery
 
 import debug  # pyflakes:ignore
 
+from ietf.doc.storage_utils import retrieve_str
 import ietf.review.mailarch
 
 from ietf.doc.factories import ( NewRevisionDocEventFactory, IndividualDraftFactory, WgDraftFactory,
 
@@ -63,6 +64,10 @@ class ReviewTests(TestCase):
         review_file = Path(self.review_subdir) / f"{assignment.review.name}.txt"
         content = review_file.read_text()
         self.assertEqual(content, expected_content)
+        self.assertEqual(
+            retrieve_str("review", review_file.name),
+            expected_content
+        )
         review_ftp_file = Path(settings.FTP_DIR) / "review" / review_file.name
         self.assertTrue(review_file.samefile(review_ftp_file))
@@ -14,6 +14,7 @@ from django.urls import reverse as urlreverse
 
 from ietf.doc.factories import StatementFactory, DocEventFactory
 from ietf.doc.models import Document, State, NewRevisionDocEvent
+from ietf.doc.storage_utils import retrieve_str
 from ietf.group.models import Group
 from ietf.person.factories import PersonFactory
 from ietf.utils.mail import outbox, empty_outbox
 
@@ -185,8 +186,16 @@ This test section has some text.
         self.assertEqual("%02d" % (int(rev) + 1), doc.rev)
         if postdict["statement_submission"] == "enter":
             self.assertEqual(f"# {username}", doc.text())
+            self.assertEqual(
+                retrieve_str("statement", f"{doc.name}-{doc.rev}.md"),
+                f"# {username}"
+            )
         else:
             self.assertEqual("not valid pdf", doc.text())
+            self.assertEqual(
+                retrieve_str("statement", f"{doc.name}-{doc.rev}.pdf"),
+                "not valid pdf"
+            )
         self.assertEqual(docevent_count + 1, doc.docevent_set.count())
         self.assertEqual(0, len(outbox))
         rev = doc.rev
 
@@ -255,8 +264,16 @@ This test section has some text.
         self.assertIsNotNone(statement.history_set.last().latest_event(type="published_statement"))
         if postdict["statement_submission"] == "enter":
             self.assertEqual(statement.text_or_error(), "some stuff")
+            self.assertEqual(
+                retrieve_str("statement", statement.uploaded_filename),
+                "some stuff"
+            )
         else:
             self.assertTrue(statement.uploaded_filename.endswith("pdf"))
+            self.assertEqual(
+                retrieve_str("statement", f"{statement.name}-{statement.rev}.pdf"),
+                "not valid pdf"
+            )
         self.assertEqual(len(outbox), 0)
 
         existing_statement = StatementFactory()
@@ -19,6 +19,7 @@ from ietf.doc.factories import ( DocumentFactory, IndividualRfcFactory,
     WgRfcFactory, DocEventFactory, WgDraftFactory )
 from ietf.doc.models import ( Document, State, DocEvent,
     BallotPositionDocEvent, NewRevisionDocEvent, TelechatDocEvent, WriteupDocEvent )
+from ietf.doc.storage_utils import retrieve_str
 from ietf.doc.utils import create_ballot_if_not_open
 from ietf.doc.views_status_change import default_approval_text
 from ietf.group.models import Person
 
@@ -71,7 +72,7 @@ class StatusChangeTests(TestCase):
                                           statchg_relation_row_blah="tois")
                               )
         self.assertEqual(r.status_code, 302)
         status_change = Document.objects.get(name='status-change-imaginary-new')
         self.assertEqual(status_change.get_state('statchg').slug,'adrev')
         self.assertEqual(status_change.rev,'00')
         self.assertEqual(status_change.ad.name,'Areað Irector')
 
@@ -563,6 +564,8 @@ class StatusChangeSubmitTests(TestCase):
         ftp_filepath = Path(settings.FTP_DIR) / "status-changes" / basename
         self.assertFalse(filepath.exists())
         self.assertFalse(ftp_filepath.exists())
+        with self.assertRaises(FileNotFoundError):
+            retrieve_str("statchg", basename)
         r = self.client.post(url,dict(content="Some initial review text\n",submit_response="1"))
         self.assertEqual(r.status_code,302)
         doc = Document.objects.get(name='status-change-imaginary-mid-review')
 
@@ -571,6 +574,10 @@ class StatusChangeSubmitTests(TestCase):
             self.assertEqual(f.read(),"Some initial review text\n")
         with ftp_filepath.open() as f:
             self.assertEqual(f.read(),"Some initial review text\n")
+        self.assertEqual(
+            retrieve_str("statchg", basename),
+            "Some initial review text\n"
+        )
         self.assertTrue( "mid-review-00" in doc.latest_event(NewRevisionDocEvent).desc)
 
     def test_subsequent_submission(self):
 
@@ -607,7 +614,8 @@ class StatusChangeSubmitTests(TestCase):
         self.assertContains(r, "does not appear to be a text file")
 
         # sane post uploading a file
-        test_file = StringIO("This is a new proposal.")
+        test_content = "This is a new proposal."
+        test_file = StringIO(test_content)
         test_file.name = "unnamed"
         r = self.client.post(url,dict(txt=test_file,submit_response="1"))
         self.assertEqual(r.status_code, 302)
 
@@ -615,8 +623,12 @@ class StatusChangeSubmitTests(TestCase):
         self.assertEqual(doc.rev,'01')
         path = os.path.join(settings.STATUS_CHANGE_PATH, '%s-%s.txt' % (doc.name, doc.rev))
         with io.open(path) as f:
-            self.assertEqual(f.read(),"This is a new proposal.")
+            self.assertEqual(f.read(), test_content)
             f.close()
+        self.assertEqual(
+            retrieve_str("statchg", f"{doc.name}-{doc.rev}.txt"),
+            test_content
+        )
         self.assertTrue( "mid-review-01" in doc.latest_event(NewRevisionDocEvent).desc)
 
         # verify reset text button works
@@ -1510,7 +1510,7 @@ def update_or_create_draft_bibxml_file(doc, rev):
         existing_bibxml = ""
     if normalized_bibxml.strip() != existing_bibxml.strip():
         log.log(f"Writing {ref_rev_file_path}")
-        ref_rev_file_path.write_text(normalized_bibxml, encoding="utf8")
+        ref_rev_file_path.write_text(normalized_bibxml, encoding="utf8")  # TODO-BLOBSTORE
 
 
 def ensure_draft_bibxml_path_exists():
@@ -101,6 +101,7 @@ def submit(request, name):
             content = form.cleaned_data['bofreq_content']
             with io.open(bofreq.get_file_name(), 'w', encoding='utf-8') as destination:
                 destination.write(content)
+            bofreq.store_str(bofreq.get_base_name(), content)
             email_bofreq_new_revision(request, bofreq)
             return redirect('ietf.doc.views_doc.document_main', name=bofreq.name)
 
@@ -175,6 +176,7 @@ def new_bof_request(request):
             content = form.cleaned_data['bofreq_content']
             with io.open(bofreq.get_file_name(), 'w', encoding='utf-8') as destination:
                 destination.write(content)
+            bofreq.store_str(bofreq.get_base_name(), content)
             email_bofreq_new_revision(request, bofreq)
             return redirect('ietf.doc.views_doc.document_main', name=bofreq.name)
@@ -441,9 +441,10 @@ def submit(request, name, option=None):
             )  # update rev
             with charter_filename.open("w", encoding="utf-8") as destination:
                 if form.cleaned_data["txt"]:
-                    destination.write(form.cleaned_data["txt"])
+                    content = form.cleaned_data["txt"]
                 else:
-                    destination.write(form.cleaned_data["content"])
+                    content = form.cleaned_data["content"]
+                destination.write(content)
             # Also provide a copy to the legacy ftp source directory, which is served by rsync
             # This replaces the hardlink copy that ghostlink has made in the past
             # Still using a hardlink as long as these are on the same filesystem.
 
@@ -454,7 +455,8 @@ def submit(request, name, option=None):
                 log(
                     "There was an error creating a hardlink at %s pointing to %s"
                     % (ftp_filename, charter_filename)
                 )
+            charter.store_str(charter_filename.name, content)
 
             if option in ["initcharter", "recharter"] and charter.ad == None:
@@ -186,9 +186,10 @@ class UploadForm(forms.Form):
         filepath = Path(settings.CONFLICT_REVIEW_PATH) / basename
         with filepath.open('w', encoding='utf-8') as destination:
             if self.cleaned_data['txt']:
-                destination.write(self.cleaned_data['txt'])
+                content = self.cleaned_data['txt']
             else:
-                destination.write(self.cleaned_data['content'])
+                content = self.cleaned_data['content']
+            destination.write(content)
         ftp_filepath = Path(settings.FTP_DIR) / "conflict-reviews" / basename
         try:
             os.link(filepath, ftp_filepath)  # Path.hardlink_to is not available until 3.10
 
@@ -197,6 +198,7 @@ class UploadForm(forms.Form):
                 "There was an error creating a hardlink at %s pointing to %s: %s"
                 % (ftp_filepath, filepath, e)
             )
+        review.store_str(basename, content)
 
 #This is very close to submit on charter - can we get better reuse?
 @role_required('Area Director','Secretariat')
@@ -32,6 +32,7 @@ from ietf.doc.mails import ( email_pulled_from_rfc_queue, email_resurrect_reques
     generate_publication_request, email_adopted, email_intended_status_changed,
     email_iesg_processing_document, email_ad_approved_doc,
     email_iana_expert_review_state_changed )
+from ietf.doc.storage_utils import retrieve_bytes, store_bytes
 from ietf.doc.utils import ( add_state_change_event, can_adopt_draft, can_unadopt_draft,
     get_tags_for_stream_id, nice_consensus, update_action_holders,
     update_reminder, update_telechat, make_notify_changed_event, get_initial_notify,
 
@@ -897,6 +898,11 @@ def restore_draft_file(request, draft):
         except shutil.Error as ex:
             messages.warning(request, 'There was an error restoring the Internet-Draft file: {} ({})'.format(file, ex))
             log.log("  Exception %s when attempting to move %s" % (ex, file))
+        _, ext = os.path.splitext(os.path.basename(file))
+        if ext:
+            ext = ext[1:]
+            blobname = f"{ext}/{basename}.{ext}"
+            store_bytes("active-draft", blobname, retrieve_bytes("draft", blobname))
 
 
 class ShepherdWriteupUploadForm(forms.Form):
@@ -167,6 +167,8 @@ def edit_material(request, name=None, acronym=None, action=None, doc_type=None):
             with filepath.open('wb+') as dest:
                 for chunk in f.chunks():
                     dest.write(chunk)
+            f.seek(0)
+            doc.store_file(basename, f)
             if not doc.meeting_related():
                 log.assertion('doc.type_id == "slides"')
                 ftp_filepath = Path(settings.FTP_DIR) / doc.type_id / basename
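A recurring pattern in this diff: after an upload's chunks have been streamed to disk, the file object is rewound before it is handed to the blob store, because the chunk iteration leaves the read position at end-of-file. A minimal standalone sketch of that pattern, reusing the names from the hunk above:

    # hedged sketch of the dual-write pattern; store_file is the helper this PR adds
    with filepath.open('wb+') as dest:
        for chunk in f.chunks():    # consuming chunks leaves f positioned at EOF
            dest.write(chunk)
    f.seek(0)                       # rewind so the blob store sees the full content
    doc.store_file(basename, f)
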
@@ -805,6 +805,7 @@ def complete_review(request, name, assignment_id=None, acronym=None):
 
         review_path = Path(review.get_file_path()) / f"{review.name}.txt"
         review_path.write_text(content)
+        review.store_str(f"{review.name}.txt", content, allow_overwrite=True) # We have a bug that review revisions dont create a new version!
         review_ftp_path = Path(settings.FTP_DIR) / "review" / review_path.name
         # See https://github.com/ietf-tools/datatracker/issues/6941 - when that's
         # addressed, making this link should not be conditional
@@ -137,12 +137,15 @@ def submit(request, name):
                 mode="wb" if writing_pdf else "w"
             ) as destination:
                 if writing_pdf:
-                    for chunk in form.cleaned_data["statement_file"].chunks():
+                    f = form.cleaned_data["statement_file"]
+                    for chunk in f.chunks():
                         destination.write(chunk)
+                    f.seek(0)
+                    statement.store_file(statement.uploaded_filename, f)
                 else:
                     destination.write(markdown_content)
+                    statement.store_str(statement.uploaded_filename, markdown_content)
             return redirect("ietf.doc.views_doc.document_main", name=statement.name)
 
         else:
             if statement.uploaded_filename.endswith("pdf"):
                 text = CONST_PDF_REV_NOTICE
@@ -254,10 +257,14 @@ def new_statement(request):
                 mode="wb" if writing_pdf else "w"
             ) as destination:
                 if writing_pdf:
-                    for chunk in form.cleaned_data["statement_file"].chunks():
+                    f = form.cleaned_data["statement_file"]
+                    for chunk in f.chunks():
                         destination.write(chunk)
+                    f.seek(0)
+                    statement.store_file(statement.uploaded_filename, f)
                 else:
                     destination.write(markdown_content)
+                    statement.store_str(statement.uploaded_filename, markdown_content)
             return redirect("ietf.doc.views_doc.document_main", name=statement.name)
 
         else:
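The statement views show the split the PR makes between the two write helpers: binary payloads (PDF uploads) go through store_file with a rewound file object, while text payloads go through store_str. A hedged sketch of that branching as a standalone helper; this function is illustrative and does not exist in the PR:

    def store_statement(statement, uploaded_file=None, markdown_content=None):
        # hypothetical helper mirroring the branch above
        if uploaded_file is not None:      # binary (e.g. PDF) payload
            uploaded_file.seek(0)
            statement.store_file(statement.uploaded_filename, uploaded_file)
        else:                              # text payload
            statement.store_str(statement.uploaded_filename, markdown_content)
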
@@ -160,9 +160,11 @@ class UploadForm(forms.Form):
         filename = Path(settings.STATUS_CHANGE_PATH) / basename
         with io.open(filename, 'w', encoding='utf-8') as destination:
             if self.cleaned_data['txt']:
-                destination.write(self.cleaned_data['txt'])
+                content = self.cleaned_data['txt']
             else:
-                destination.write(self.cleaned_data['content'])
+                content = self.cleaned_data['content']
+            destination.write(content)
+        doc.store_str(basename, content)
         try:
             ftp_filename = Path(settings.FTP_DIR) / "status-changes" / basename
             os.link(filename, ftp_filename) # Path.hardlink is not available until 3.10
@@ -10,6 +10,7 @@ from pathlib import Path
 from django.conf import settings
 from django.template.loader import render_to_string
 
+from ietf.doc.storage_utils import store_file
 from ietf.utils import log
 
 from .models import Group
@@ -43,6 +44,11 @@ def generate_wg_charters_files_task():
         encoding="utf8",
     )
 
+    with charters_file.open("rb") as f:
+        store_file("indexes", "1wg-charters.txt", f, allow_overwrite=True)
+    with charters_by_acronym_file.open("rb") as f:
+        store_file("indexes", "1wg-charters-by-acronym.txt", f, allow_overwrite=True)
+
     charter_copy_dests = [
         getattr(settings, "CHARTER_COPY_PATH", None),
         getattr(settings, "CHARTER_COPY_OTHER_PATH", None),
@@ -102,3 +108,8 @@ def generate_wg_summary_files_task():
         ),
         encoding="utf8",
     )
+
+    with summary_file.open("rb") as f:
+        store_file("indexes", "1wg-summary.txt", f, allow_overwrite=True)
+    with summary_by_acronym_file.open("rb") as f:
+        store_file("indexes", "1wg-summary-by-acronym.txt", f, allow_overwrite=True)
@@ -29,6 +29,7 @@ from ietf.community.models import CommunityList
 from ietf.community.utils import reset_name_contains_index_for_rule
 from ietf.doc.factories import WgDraftFactory, IndividualDraftFactory, CharterFactory, BallotDocEventFactory
 from ietf.doc.models import Document, DocEvent, State
+from ietf.doc.storage_utils import retrieve_str
 from ietf.doc.utils_charter import charter_name_for_group
 from ietf.group.admin import GroupForm as AdminGroupForm
 from ietf.group.factories import (GroupFactory, RoleFactory, GroupEventFactory,
@@ -303,20 +304,26 @@ class GroupPagesTests(TestCase):
 
         generate_wg_summary_files_task()
 
-        summary_by_area_contents = (
-            Path(settings.GROUP_SUMMARY_PATH) / "1wg-summary.txt"
-        ).read_text(encoding="utf8")
-        self.assertIn(group.parent.name, summary_by_area_contents)
-        self.assertIn(group.acronym, summary_by_area_contents)
-        self.assertIn(group.name, summary_by_area_contents)
-        self.assertIn(chair.address, summary_by_area_contents)
+        for summary_by_area_contents in [
+            (
+                Path(settings.GROUP_SUMMARY_PATH) / "1wg-summary.txt"
+            ).read_text(encoding="utf8"),
+            retrieve_str("indexes", "1wg-summary.txt")
+        ]:
+            self.assertIn(group.parent.name, summary_by_area_contents)
+            self.assertIn(group.acronym, summary_by_area_contents)
+            self.assertIn(group.name, summary_by_area_contents)
+            self.assertIn(chair.address, summary_by_area_contents)
 
-        summary_by_acronym_contents = (
-            Path(settings.GROUP_SUMMARY_PATH) / "1wg-summary-by-acronym.txt"
-        ).read_text(encoding="utf8")
-        self.assertIn(group.acronym, summary_by_acronym_contents)
-        self.assertIn(group.name, summary_by_acronym_contents)
-        self.assertIn(chair.address, summary_by_acronym_contents)
+        for summary_by_acronym_contents in [
+            (
+                Path(settings.GROUP_SUMMARY_PATH) / "1wg-summary-by-acronym.txt"
+            ).read_text(encoding="utf8"),
+            retrieve_str("indexes", "1wg-summary-by-acronym.txt")
+        ]:
+            self.assertIn(group.acronym, summary_by_acronym_contents)
+            self.assertIn(group.name, summary_by_acronym_contents)
+            self.assertIn(chair.address, summary_by_acronym_contents)
 
     def test_chartering_groups(self):
         group = CharterFactory(group__type_id='wg',group__parent=GroupFactory(type_id='area'),states=[('charter','intrev')]).group
@@ -15,6 +15,8 @@ from typing import List
 
 from django.conf import settings
 
+from ietf.doc.storage_utils import store_file
+
 from .index import all_id_txt, all_id2_txt, id_index_txt
 
 
@@ -38,6 +40,8 @@ class TempFileManager(AbstractContextManager):
             target = path / dest_path.name
             target.unlink(missing_ok=True)
             os.link(dest_path, target) # until python>=3.10
+        with dest_path.open("rb") as f:
+            store_file("indexes", dest_path.name, f, allow_overwrite=True)
 
     def cleanup(self):
         for tf_path in self.cleanup_list:
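The storage_utils helpers recur throughout this diff. A minimal usage sketch of the API surface as implied by the calls in these hunks, kind first, then blob name; the keyword defaults are an assumption:

    from ietf.doc.storage_utils import (
        store_str, store_bytes, store_file,
        retrieve_str, retrieve_bytes, exists_in_storage, remove_from_storage,
    )

    store_str("indexes", "example.txt", "hello", allow_overwrite=True)  # text in
    assert retrieve_str("indexes", "example.txt") == "hello"            # text out
    store_bytes("staging", "deck.pdf", b"%PDF-1.4 ...")                 # bytes in
    data = retrieve_bytes("staging", "deck.pdf")                        # bytes out
    if exists_in_storage("staging", "deck.pdf"):
        remove_from_storage("staging", "deck.pdf")

Regenerated artifacts such as the 1wg-* indexes always pass allow_overwrite=True, since each task run rewrites the same blob names.
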
@@ -15,6 +15,7 @@ import debug # pyflakes:ignore
 
 from ietf.doc.factories import WgDraftFactory, RfcFactory
 from ietf.doc.models import Document, RelatedDocument, State, LastCallDocEvent, NewRevisionDocEvent
+from ietf.doc.storage_utils import retrieve_str
 from ietf.group.factories import GroupFactory
 from ietf.name.models import DocRelationshipName
 from ietf.idindex.index import all_id_txt, all_id2_txt, id_index_txt
@@ -203,5 +204,9 @@ class TaskTests(TestCase):
         self.assertFalse(path2.exists()) # left behind
         # check destination contents and permissions
         self.assertEqual(dest.read_text(), "yay")
+        self.assertEqual(
+            retrieve_str("indexes", "yay.txt"),
+            "yay"
+        )
         self.assertEqual(dest.stat().st_mode & 0o777, 0o644)
         self.assertTrue(dest.samefile(other_path / "yay.txt"))
@@ -379,6 +379,8 @@ class LiaisonModelForm(forms.ModelForm):
             attach_file = io.open(os.path.join(settings.LIAISON_ATTACH_PATH, attach.name + extension), 'wb')
             attach_file.write(attached_file.read())
             attach_file.close()
+            attached_file.seek(0)
+            attach.store_file(attach.uploaded_filename, attached_file)
 
             if not self.is_new:
                 # create modified event
@@ -19,6 +19,7 @@ from django.utils import timezone
 from io import StringIO
 from pyquery import PyQuery
 
+from ietf.doc.storage_utils import retrieve_str
 from ietf.utils.test_utils import TestCase, login_testing_unauthorized
 from ietf.utils.mail import outbox
 
@@ -414,7 +415,8 @@ class LiaisonManagementTests(TestCase):
 
         # edit
         attachments_before = liaison.attachments.count()
-        test_file = StringIO("hello world")
+        test_content = "hello world"
+        test_file = StringIO(test_content)
         test_file.name = "unnamed"
         r = self.client.post(url,
                              dict(from_groups=str(from_group.pk),
@@ -452,9 +454,12 @@ class LiaisonManagementTests(TestCase):
         self.assertEqual(attachment.title, "attachment")
         with (Path(settings.LIAISON_ATTACH_PATH) / attachment.uploaded_filename).open() as f:
             written_content = f.read()
+        self.assertEqual(written_content, test_content)
+        self.assertEqual(
+            retrieve_str(attachment.type_id, attachment.uploaded_filename),
+            test_content,
+        )
 
-        test_file.seek(0)
-        self.assertEqual(written_content, test_file.read())
 
     def test_incoming_access(self):
         '''Ensure only Secretariat, Liaison Managers, and Authorized Individuals
@@ -704,7 +709,8 @@ class LiaisonManagementTests(TestCase):
 
         # add new
         mailbox_before = len(outbox)
-        test_file = StringIO("hello world")
+        test_content = "hello world"
+        test_file = StringIO(test_content)
         test_file.name = "unnamed"
         from_groups = [ str(g.pk) for g in Group.objects.filter(type="sdo") ]
         to_group = Group.objects.get(acronym="mars")
@@ -756,6 +762,11 @@ class LiaisonManagementTests(TestCase):
         self.assertEqual(attachment.title, "attachment")
         with (Path(settings.LIAISON_ATTACH_PATH) / attachment.uploaded_filename).open() as f:
             written_content = f.read()
+        self.assertEqual(written_content, test_content)
+        self.assertEqual(
+            retrieve_str(attachment.type_id, attachment.uploaded_filename),
+            test_content
+        )
 
         test_file.seek(0)
         self.assertEqual(written_content, test_file.read())
@@ -783,7 +794,8 @@ class LiaisonManagementTests(TestCase):
 
         # add new
         mailbox_before = len(outbox)
-        test_file = StringIO("hello world")
+        test_content = "hello world"
+        test_file = StringIO(test_content)
         test_file.name = "unnamed"
         from_group = Group.objects.get(acronym="mars")
         to_group = Group.objects.filter(type="sdo")[0]
@@ -835,9 +847,11 @@ class LiaisonManagementTests(TestCase):
         self.assertEqual(attachment.title, "attachment")
         with (Path(settings.LIAISON_ATTACH_PATH) / attachment.uploaded_filename).open() as f:
             written_content = f.read()
-
-        test_file.seek(0)
-        self.assertEqual(written_content, test_file.read())
+        self.assertEqual(written_content, test_content)
+        self.assertEqual(
+            retrieve_str(attachment.type_id, attachment.uploaded_filename),
+            test_content
+        )
 
         self.assertEqual(len(outbox), mailbox_before + 1)
         self.assertTrue("Liaison Statement" in outbox[-1]["Subject"])
@@ -882,7 +896,8 @@ class LiaisonManagementTests(TestCase):
 
 
         # get minimum edit post data
-        file = StringIO('dummy file')
+        test_data = "dummy file"
+        file = StringIO(test_data)
         file.name = "upload.txt"
         post_data = dict(
             from_groups = ','.join([ str(x.pk) for x in liaison.from_groups.all() ]),
@@ -909,6 +924,11 @@ class LiaisonManagementTests(TestCase):
         self.assertEqual(liaison.attachments.count(),1)
         event = liaison.liaisonstatementevent_set.order_by('id').last()
         self.assertTrue(event.desc.startswith('Added attachment'))
+        attachment = liaison.attachments.get()
+        self.assertEqual(
+            retrieve_str(attachment.type_id, attachment.uploaded_filename),
+            test_data
+        )
 
     def test_liaison_edit_attachment(self):
 
@@ -9,6 +9,7 @@ import datetime
 from django.core.files.base import ContentFile
 from django.db.models import Q
 
+from ietf.doc.storage_utils import store_str
 from ietf.meeting.models import (Attended, Meeting, Session, SchedulingEvent, Schedule,
     TimeSlot, SessionPresentation, FloorPlan, Room, SlideSubmission, Constraint,
     MeetingHost, ProceedingsMaterial)
@@ -239,6 +240,10 @@ class SlideSubmissionFactory(factory.django.DjangoModelFactory):
     make_file = factory.PostGeneration(
         lambda obj, create, extracted, **kwargs: open(obj.staged_filepath(),'a').close()
     )
+
+    store_submission = factory.PostGeneration(
+        lambda obj, create, extracted, **kwargs: store_str("staging", obj.filename, "")
+    )
 
 class ConstraintFactory(factory.django.DjangoModelFactory):
     class Meta:
@@ -361,6 +361,7 @@ class InterimSessionModelForm(forms.ModelForm):
             os.makedirs(directory)
         with io.open(path, "w", encoding='utf-8') as file:
             file.write(self.cleaned_data['agenda'])
+        doc.store_str(doc.uploaded_filename, self.cleaned_data['agenda'])


 class InterimAnnounceForm(forms.ModelForm):
@@ -649,6 +649,11 @@ def read_session_file(type, num, doc):
 def read_agenda_file(num, doc):
     return read_session_file('agenda', num, doc)
 
+# TODO-BLOBSTORE: this is _yet another_ draft derived variant created when users
+# ask for drafts from the meeting agenda page. Consider whether to refactor this
+# now to not call out to external binaries, and consider whether we need this extra
+# format at all in the draft blobstore. if so, it would probably be stored under
+# something like plainpdf/
 def convert_draft_to_pdf(doc_name):
     inpath = os.path.join(settings.IDSUBMIT_REPOSITORY_PATH, doc_name + ".txt")
     outpath = os.path.join(settings.INTERNET_DRAFT_PDF_PATH, doc_name + ".pdf")
@@ -0,0 +1,56 @@
+# Copyright The IETF Trust 2025, All Rights Reserved
+
+from django.db import migrations, models
+import ietf.meeting.models
+import ietf.utils.fields
+import ietf.utils.storage
+import ietf.utils.validators
+
+
+class Migration(migrations.Migration):
+
+    dependencies = [
+        ("meeting", "0009_session_meetecho_recording_name"),
+    ]
+
+    operations = [
+        migrations.AlterField(
+            model_name="floorplan",
+            name="image",
+            field=models.ImageField(
+                blank=True,
+                default=None,
+                storage=ietf.utils.storage.BlobShadowFileSystemStorage(
+                    kind="", location=None
+                ),
+                upload_to=ietf.meeting.models.floorplan_path,
+            ),
+        ),
+        migrations.AlterField(
+            model_name="meetinghost",
+            name="logo",
+            field=ietf.utils.fields.MissingOkImageField(
+                height_field="logo_height",
+                storage=ietf.utils.storage.BlobShadowFileSystemStorage(
+                    kind="", location=None
+                ),
+                upload_to=ietf.meeting.models._host_upload_path,
+                validators=[
+                    ietf.utils.validators.MaxImageSizeValidator(400, 400),
+                    ietf.utils.validators.WrappedValidator(
+                        ietf.utils.validators.validate_file_size, True
+                    ),
+                    ietf.utils.validators.WrappedValidator(
+                        ietf.utils.validators.validate_file_extension,
+                        [".png", ".jpg", ".jpeg"],
+                    ),
+                    ietf.utils.validators.WrappedValidator(
+                        ietf.utils.validators.validate_mime_type,
+                        ["image/jpeg", "image/png"],
+                        True,
+                    ),
+                ],
+                width_field="logo_width",
+            ),
+        ),
+    ]
@@ -39,7 +39,7 @@ from ietf.name.models import (
 from ietf.person.models import Person
 from ietf.utils.decorators import memoize
 from ietf.utils.history import find_history_replacements_active_at, find_history_active_at
-from ietf.utils.storage import NoLocationMigrationFileSystemStorage
+from ietf.utils.storage import BlobShadowFileSystemStorage
 from ietf.utils.text import xslugify
 from ietf.utils.timezone import datetime_from_date, date_today
 from ietf.utils.models import ForeignKey
@@ -527,7 +527,12 @@ class FloorPlan(models.Model):
     modified= models.DateTimeField(auto_now=True)
     meeting = ForeignKey(Meeting)
     order = models.SmallIntegerField()
-    image = models.ImageField(storage=NoLocationMigrationFileSystemStorage(), upload_to=floorplan_path, blank=True, default=None)
+    image = models.ImageField(
+        storage=BlobShadowFileSystemStorage(kind="floorplan"),
+        upload_to=floorplan_path,
+        blank=True,
+        default=None,
+    )
     #
     class Meta:
         ordering = ['-id',]
@@ -1431,8 +1436,12 @@ class MeetingHost(models.Model):
     """Meeting sponsor"""
     meeting = ForeignKey(Meeting, related_name='meetinghosts')
     name = models.CharField(max_length=255, blank=False)
+    # TODO-BLOBSTORE - capture these logos and look for other ImageField like model fields.
     logo = MissingOkImageField(
-        storage=NoLocationMigrationFileSystemStorage(location=settings.MEETINGHOST_LOGO_PATH),
+        storage=BlobShadowFileSystemStorage(
+            kind="meetinghostlogo",
+            location=settings.MEETINGHOST_LOGO_PATH,
+        ),
         upload_to=_host_upload_path,
         width_field='logo_width',
         height_field='logo_height',
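BlobShadowFileSystemStorage is used above and in the migrations but its implementation lives in ietf/utils/storage.py, outside this excerpt. A minimal sketch of the shadowing idea it names, assuming it wraps the filesystem storage it replaces and honors the ENABLE_BLOBSTORAGE offswitch; this class is illustrative, not the real one:

    from pathlib import Path
    from django.conf import settings
    from django.core.files.storage import FileSystemStorage

    class ShadowingStorageSketch(FileSystemStorage):
        """Save to the filesystem as usual, then shadow the bytes to a blob kind."""
        def __init__(self, kind="", location=None):
            super().__init__(location=location)
            self.kind = kind

        def save(self, name, content, max_length=None):
            saved_name = super().save(name, content, max_length)
            if getattr(settings, "ENABLE_BLOBSTORAGE", False) and self.kind:
                from ietf.doc.storage_utils import store_file
                content.seek(0)
                # blob names strip the path component, as the commit log describes
                store_file(self.kind, Path(saved_name).name, content, allow_overwrite=True)
            return saved_name
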
@@ -38,6 +38,7 @@ from django.utils.text import slugify
 import debug # pyflakes:ignore
 
 from ietf.doc.models import Document, NewRevisionDocEvent
+from ietf.doc.storage_utils import exists_in_storage, remove_from_storage, retrieve_bytes, retrieve_str
 from ietf.group.models import Group, Role, GroupFeatures
 from ietf.group.utils import can_manage_group
 from ietf.person.models import Person
@@ -55,6 +56,7 @@ from ietf.meeting.views import get_summary_by_area, get_summary_by_type, get_sum
 from ietf.name.models import SessionStatusName, ImportantDateName, RoleName, ProceedingsMaterialTypeName
 from ietf.utils.decorators import skip_coverage
 from ietf.utils.mail import outbox, empty_outbox, get_payload_text
+from ietf.utils.test_runner import TestBlobstoreManager
 from ietf.utils.test_utils import TestCase, login_testing_unauthorized, unicontent
 from ietf.utils.timezone import date_today, time_now
 
@@ -112,7 +114,7 @@ class BaseMeetingTestCase(TestCase):
         # files will upload to the locations specified in settings.py.
         # Note that this will affect any use of the storage class in
         # meeting.models - i.e., FloorPlan.image and MeetingHost.logo
-        self.patcher = patch('ietf.meeting.models.NoLocationMigrationFileSystemStorage.base_location',
+        self.patcher = patch('ietf.meeting.models.BlobShadowFileSystemStorage.base_location',
                              new_callable=PropertyMock)
         mocked = self.patcher.start()
         mocked.return_value = self.storage_dir
@@ -5228,6 +5230,7 @@ class InterimTests(TestCase):
 
     def do_interim_request_single_virtual(self, emails_expected):
         make_meeting_test_data()
+        TestBlobstoreManager().emptyTestBlobstores()
         group = Group.objects.get(acronym='mars')
         date = date_today() + datetime.timedelta(days=30)
         time = time_now().replace(microsecond=0,second=0)
@@ -5278,6 +5281,12 @@ class InterimTests(TestCase):
         doc = session.materials.first()
         path = os.path.join(doc.get_file_path(),doc.filename_with_rev())
         self.assertTrue(os.path.exists(path))
+        with Path(path).open() as f:
+            self.assertEqual(f.read(), agenda)
+        self.assertEqual(
+            retrieve_str("agenda",doc.uploaded_filename),
+            agenda
+        )
         # check notices to secretariat and chairs
         self.assertEqual(len(outbox), length_before + emails_expected)
         return meeting
@@ -5299,6 +5308,7 @@ class InterimTests(TestCase):
 
     def test_interim_request_single_in_person(self):
         make_meeting_test_data()
+        TestBlobstoreManager().emptyTestBlobstores()
         group = Group.objects.get(acronym='mars')
         date = date_today() + datetime.timedelta(days=30)
         time = time_now().replace(microsecond=0,second=0)
@@ -5345,6 +5355,10 @@ class InterimTests(TestCase):
         timeslot = session.official_timeslotassignment().timeslot
         self.assertEqual(timeslot.time,dt)
         self.assertEqual(timeslot.duration,duration)
+        self.assertEqual(
+            retrieve_str("agenda",session.agenda().uploaded_filename),
+            agenda
+        )
 
     def test_interim_request_multi_day(self):
         make_meeting_test_data()
@@ -5412,6 +5426,11 @@ class InterimTests(TestCase):
         self.assertEqual(timeslot.time,dt2)
         self.assertEqual(timeslot.duration,duration)
         self.assertEqual(session.agenda_note,agenda_note)
+        for session in meeting.session_set.all():
+            self.assertEqual(
+                retrieve_str("agenda",session.agenda().uploaded_filename),
+                agenda
+            )
 
     def test_interim_request_multi_day_non_consecutive(self):
         make_meeting_test_data()
@@ -5474,6 +5493,7 @@ class InterimTests(TestCase):
 
     def test_interim_request_series(self):
         make_meeting_test_data()
+        TestBlobstoreManager().emptyTestBlobstores()
         meeting_count_before = Meeting.objects.filter(type='interim').count()
         date = date_today() + datetime.timedelta(days=30)
         if (date.month, date.day) == (12, 31):
@@ -5561,6 +5581,11 @@ class InterimTests(TestCase):
         self.assertEqual(timeslot.time,dt2)
         self.assertEqual(timeslot.duration,duration)
         self.assertEqual(session.agenda_note,agenda_note)
+        for session in meeting.session_set.all():
+            self.assertEqual(
+                retrieve_str("agenda",session.agenda().uploaded_filename),
+                agenda
+            )
 
 
     # test_interim_pending subsumed by test_appears_on_pending
|
|||
def test_interim_request_edit_agenda_updates_doc(self):
|
||||
"""Updating the agenda through the request edit form should update the doc correctly"""
|
||||
make_interim_test_data()
|
||||
TestBlobstoreManager().emptyTestBlobstores()
|
||||
meeting = add_event_info_to_session_qs(Session.objects.filter(meeting__type='interim', group__acronym='mars')).filter(current_status='sched').first().meeting
|
||||
group = meeting.session_set.first().group
|
||||
url = urlreverse('ietf.meeting.views.interim_request_edit', kwargs={'number': meeting.number})
|
||||
|
@ -6134,6 +6160,10 @@ class InterimTests(TestCase):
|
|||
self.assertNotEqual(agenda_doc.uploaded_filename, uploaded_filename_before, 'Uploaded filename should be updated')
|
||||
with (Path(agenda_doc.get_file_path()) / agenda_doc.uploaded_filename).open() as f:
|
||||
self.assertEqual(f.read(), 'modified agenda contents', 'New agenda contents should be saved')
|
||||
self.assertEqual(
|
||||
retrieve_str(agenda_doc.type_id, agenda_doc.uploaded_filename),
|
||||
"modified agenda contents"
|
||||
)
|
||||
|
||||
def test_interim_request_details_permissions(self):
|
||||
make_interim_test_data()
|
||||
|
@ -6354,12 +6384,14 @@ class MaterialsTests(TestCase):
|
|||
q = PyQuery(r.content)
|
||||
self.assertIn('Upload', str(q("title")))
|
||||
self.assertFalse(session.presentations.exists())
|
||||
test_file = StringIO('%PDF-1.4\n%âãÏÓ\nthis is some text for a test')
|
||||
test_content = '%PDF-1.4\n%âãÏÓ\nthis is some text for a test'
|
||||
test_file = StringIO(test_content)
|
||||
test_file.name = "not_really.pdf"
|
||||
r = self.client.post(url,dict(file=test_file))
|
||||
self.assertEqual(r.status_code, 302)
|
||||
bs_doc = session.presentations.filter(document__type_id='bluesheets').first().document
|
||||
self.assertEqual(bs_doc.rev,'00')
|
||||
self.assertEqual(retrieve_str("bluesheets", f"{bs_doc.name}-{bs_doc.rev}.pdf"), test_content)
|
||||
r = self.client.get(url)
|
||||
self.assertEqual(r.status_code, 200)
|
||||
q = PyQuery(r.content)
|
||||
|
@ -6389,12 +6421,14 @@ class MaterialsTests(TestCase):
|
|||
q = PyQuery(r.content)
|
||||
self.assertIn('Upload', str(q("title")))
|
||||
self.assertFalse(session.presentations.exists())
|
||||
test_file = StringIO('%PDF-1.4\n%âãÏÓ\nthis is some text for a test')
|
||||
test_content = '%PDF-1.4\n%âãÏÓ\nthis is some text for a test'
|
||||
test_file = StringIO(test_content)
|
||||
test_file.name = "not_really.pdf"
|
||||
r = self.client.post(url,dict(file=test_file))
|
||||
self.assertEqual(r.status_code, 302)
|
||||
bs_doc = session.presentations.filter(document__type_id='bluesheets').first().document
|
||||
self.assertEqual(bs_doc.rev,'00')
|
||||
self.assertEqual(retrieve_str("bluesheets", f"{bs_doc.name}-{bs_doc.rev}.pdf"), test_content)
|
||||
|
||||
def test_upload_bluesheets_interim_chair_access(self):
|
||||
make_meeting_test_data()
|
||||
|
@@ -6467,27 +6501,36 @@ class MaterialsTests(TestCase):
             text = doc.text()
             self.assertIn('Some text', text)
             self.assertNotIn('<section>', text)
+
+            text = retrieve_str(doctype, f"{doc.name}-{doc.rev}.html")
+            self.assertIn('Some text', text)
+            self.assertNotIn('<section>', text)
 
             # txt upload
-            test_file = BytesIO(b'This is some text for a test, with the word\nvirtual at the beginning of a line.')
+            test_bytes = b'This is some text for a test, with the word\nvirtual at the beginning of a line.'
+            test_file = BytesIO(test_bytes)
             test_file.name = "some.txt"
             r = self.client.post(url,dict(submission_method="upload",file=test_file,apply_to_all=False))
             self.assertEqual(r.status_code, 302)
             doc = session.presentations.filter(document__type_id=doctype).first().document
             self.assertEqual(doc.rev,'01')
             self.assertFalse(session2.presentations.filter(document__type_id=doctype))
+            retrieved_bytes = retrieve_bytes(doctype, f"{doc.name}-{doc.rev}.txt")
+            self.assertEqual(retrieved_bytes, test_bytes)
 
             r = self.client.get(url)
             self.assertEqual(r.status_code, 200)
             q = PyQuery(r.content)
             self.assertIn('Revise', str(q("Title")))
-            test_file = BytesIO(b'this is some different text for a test')
+            test_bytes = b'this is some different text for a test'
+            test_file = BytesIO(test_bytes)
             test_file.name = "also_some.txt"
             r = self.client.post(url,dict(submission_method="upload",file=test_file,apply_to_all=True))
             self.assertEqual(r.status_code, 302)
             doc = Document.objects.get(pk=doc.pk)
             self.assertEqual(doc.rev,'02')
             self.assertTrue(session2.presentations.filter(document__type_id=doctype))
+            retrieved_bytes = retrieve_bytes(doctype, f"{doc.name}-{doc.rev}.txt")
+            self.assertEqual(retrieved_bytes, test_bytes)
 
             # Test bad encoding
             test_file = BytesIO('<html><h1>Title</h1><section>Some\x93text</section></html>'.encode('latin1'))
@@ -6540,12 +6583,15 @@ class MaterialsTests(TestCase):
         q = PyQuery(r.content)
         self.assertIn('Upload', str(q("title")))
         self.assertFalse(session.presentations.filter(document__type_id=doctype))
-        test_file = BytesIO(b'this is some text for a test')
+        test_bytes = b'this is some text for a test'
+        test_file = BytesIO(test_bytes)
         test_file.name = "not_really.txt"
         r = self.client.post(url,dict(submission_method="upload",file=test_file))
         self.assertEqual(r.status_code, 302)
         doc = session.presentations.filter(document__type_id=doctype).first().document
         self.assertEqual(doc.rev,'00')
+        retrieved_bytes = retrieve_bytes(doctype, f"{doc.name}-{doc.rev}.txt")
+        self.assertEqual(retrieved_bytes, test_bytes)
 
         # Verify that we don't have dead links
         url = urlreverse('ietf.meeting.views.session_details', kwargs={'num':session.meeting.number, 'acronym': session.group.acronym})
@@ -6567,12 +6613,15 @@ class MaterialsTests(TestCase):
         q = PyQuery(r.content)
         self.assertIn('Upload', str(q("title")))
         self.assertFalse(session.presentations.filter(document__type_id=doctype))
-        test_file = BytesIO(b'this is some text for a test')
+        test_bytes = b'this is some text for a test'
+        test_file = BytesIO(test_bytes)
         test_file.name = "not_really.txt"
         r = self.client.post(url,dict(submission_method="upload",file=test_file))
         self.assertEqual(r.status_code, 302)
         doc = session.presentations.filter(document__type_id=doctype).first().document
         self.assertEqual(doc.rev,'00')
+        retrieved_bytes = retrieve_bytes(doctype, f"{doc.name}-{doc.rev}.txt")
+        self.assertEqual(retrieved_bytes, test_bytes)
 
         # Verify that we don't have dead links
         url = urlreverse('ietf.meeting.views.session_details', kwargs={'num':session.meeting.number, 'acronym': session.group.acronym})
@@ -6597,18 +6646,22 @@ class MaterialsTests(TestCase):
         self.assertRedirects(r, redirect_url)
         doc = session.presentations.filter(document__type_id='agenda').first().document
         self.assertEqual(doc.rev,'00')
+        self.assertEqual(retrieve_str("agenda",f"{doc.name}-{doc.rev}.md"), test_text)
 
         r = self.client.get(url)
         self.assertEqual(r.status_code, 200)
         q = PyQuery(r.content)
         self.assertIn('Revise', str(q("Title")))
 
-        test_file = BytesIO(b'Upload after enter')
+        test_bytes = b'Upload after enter'
+        test_file = BytesIO(test_bytes)
         test_file.name = "some.txt"
         r = self.client.post(url,dict(submission_method="upload",file=test_file))
         self.assertRedirects(r, redirect_url)
         doc = Document.objects.get(pk=doc.pk)
         self.assertEqual(doc.rev,'01')
+        retrieved_bytes = retrieve_bytes("agenda", f"{doc.name}-{doc.rev}.txt")
+        self.assertEqual(retrieved_bytes, test_bytes)
 
         r = self.client.get(url)
         self.assertEqual(r.status_code, 200)
@@ -6620,6 +6673,8 @@ class MaterialsTests(TestCase):
         self.assertRedirects(r, redirect_url)
         doc = Document.objects.get(pk=doc.pk)
         self.assertEqual(doc.rev,'02')
+        self.assertEqual(retrieve_str("agenda",f"{doc.name}-{doc.rev}.md"), test_text)
 
 
 @override_settings(MEETECHO_API_CONFIG="fake settings") # enough to trigger API calls
 @patch("ietf.meeting.views.SlidesManager")
@@ -6635,7 +6690,8 @@ class MaterialsTests(TestCase):
         q = PyQuery(r.content)
         self.assertIn('Upload', str(q("title")))
         self.assertFalse(session1.presentations.filter(document__type_id='slides'))
-        test_file = BytesIO(b'this is not really a slide')
+        test_bytes = b'this is not really a slide'
+        test_file = BytesIO(test_bytes)
         test_file.name = 'not_really.txt'
         r = self.client.post(url,dict(file=test_file,title='a test slide file',apply_to_all=True,approved=True))
         self.assertEqual(r.status_code, 302)
@@ -6647,6 +6703,7 @@ class MaterialsTests(TestCase):
         self.assertEqual(mock_slides_manager_cls.call_count, 1)
         self.assertEqual(mock_slides_manager_cls.call_args, call(api_config="fake settings"))
         self.assertEqual(mock_slides_manager_cls.return_value.add.call_count, 2)
+        self.assertEqual(retrieve_bytes("slides", f"{sp.document.name}-{sp.document.rev}.txt"), test_bytes)
         # don't care which order they were called in, just that both sessions were updated
         self.assertCountEqual(
             mock_slides_manager_cls.return_value.add.call_args_list,
@@ -6658,7 +6715,8 @@ class MaterialsTests(TestCase):
         mock_slides_manager_cls.reset_mock()
 
         url = urlreverse('ietf.meeting.views.upload_session_slides',kwargs={'num':session2.meeting.number,'session_id':session2.id})
-        test_file = BytesIO(b'some other thing still not slidelike')
+        test_bytes = b'some other thing still not slidelike'
+        test_file = BytesIO(test_bytes)
         test_file.name = 'also_not_really.txt'
         r = self.client.post(url,dict(file=test_file,title='a different slide file',apply_to_all=False,approved=True))
         self.assertEqual(r.status_code, 302)
@@ -6671,6 +6729,7 @@ class MaterialsTests(TestCase):
         self.assertEqual(mock_slides_manager_cls.call_count, 1)
         self.assertEqual(mock_slides_manager_cls.call_args, call(api_config="fake settings"))
         self.assertEqual(mock_slides_manager_cls.return_value.add.call_count, 1)
+        self.assertEqual(retrieve_bytes("slides", f"{sp.document.name}-{sp.document.rev}.txt"), test_bytes)
         self.assertEqual(
             mock_slides_manager_cls.return_value.add.call_args,
             call(session=session2, slides=sp.document, order=2),
@@ -6682,7 +6741,8 @@ class MaterialsTests(TestCase):
         self.assertTrue(r.status_code, 200)
         q = PyQuery(r.content)
         self.assertIn('Revise', str(q("title")))
-        test_file = BytesIO(b'new content for the second slide deck')
+        test_bytes = b'new content for the second slide deck'
+        test_file = BytesIO(test_bytes)
         test_file.name = 'doesnotmatter.txt'
         r = self.client.post(url,dict(file=test_file,title='rename the presentation',apply_to_all=False, approved=True))
         self.assertEqual(r.status_code, 302)
@@ -6692,6 +6752,7 @@ class MaterialsTests(TestCase):
         self.assertEqual(replacement_sp.rev,'01')
         self.assertEqual(replacement_sp.document.rev,'01')
         self.assertEqual(mock_slides_manager_cls.call_count, 1)
+        self.assertEqual(retrieve_bytes("slides", f"{replacement_sp.document.name}-{replacement_sp.document.rev}.txt"), test_bytes)
         self.assertEqual(mock_slides_manager_cls.call_args, call(api_config="fake settings"))
         self.assertEqual(mock_slides_manager_cls.return_value.revise.call_count, 1)
         self.assertEqual(
@@ -6771,7 +6832,6 @@ class MaterialsTests(TestCase):
         self.assertEqual(2, agenda.docevent_set.count())
         self.assertFalse(mock_slides_manager_cls.called)
 
-
     def test_propose_session_slides(self):
         for type_id in ['ietf','interim']:
             session = SessionFactory(meeting__type_id=type_id)
@@ -6798,7 +6858,8 @@ class MaterialsTests(TestCase):
         login_testing_unauthorized(self,newperson.user.username,upload_url)
         r = self.client.get(upload_url)
         self.assertEqual(r.status_code,200)
-        test_file = BytesIO(b'this is not really a slide')
+        test_bytes = b'this is not really a slide'
+        test_file = BytesIO(test_bytes)
         test_file.name = 'not_really.txt'
         empty_outbox()
         r = self.client.post(upload_url,dict(file=test_file,title='a test slide file',apply_to_all=True,approved=False))
@@ -6806,6 +6867,10 @@ class MaterialsTests(TestCase):
         session = Session.objects.get(pk=session.pk)
         self.assertEqual(session.slidesubmission_set.count(),1)
         self.assertEqual(len(outbox),1)
+        self.assertEqual(
+            retrieve_bytes("staging", session.slidesubmission_set.get().filename),
+            test_bytes
+        )
 
         r = self.client.get(session_overview_url)
         self.assertEqual(r.status_code, 200)
@@ -6825,13 +6890,20 @@ class MaterialsTests(TestCase):
         login_testing_unauthorized(self,chair.user.username,upload_url)
         r = self.client.get(upload_url)
         self.assertEqual(r.status_code,200)
-        test_file = BytesIO(b'this is not really a slide either')
+        test_bytes = b'this is not really a slide either'
+        test_file = BytesIO(test_bytes)
         test_file.name = 'again_not_really.txt'
         empty_outbox()
         r = self.client.post(upload_url,dict(file=test_file,title='a selfapproved test slide file',apply_to_all=True,approved=True))
         self.assertEqual(r.status_code, 302)
         self.assertEqual(len(outbox),0)
         self.assertEqual(session.slidesubmission_set.count(),2)
+        sp = session.presentations.get(document__title__contains="selfapproved")
+        self.assertFalse(exists_in_storage("staging", sp.document.uploaded_filename))
+        self.assertEqual(
+            retrieve_bytes("slides", sp.document.uploaded_filename),
+            test_bytes
+        )
         self.client.logout()
 
         self.client.login(username=chair.user.username, password=chair.user.username+"+password")
@@ -6854,6 +6926,8 @@ class MaterialsTests(TestCase):
         self.assertEqual(r.status_code,302)
         self.assertEqual(SlideSubmission.objects.filter(status__slug = 'rejected').count(), 1)
         self.assertEqual(SlideSubmission.objects.filter(status__slug = 'pending').count(), 0)
+        if submission.filename is not None and submission.filename != "":
+            self.assertFalse(exists_in_storage("staging", submission.filename))
         r = self.client.get(url)
         self.assertEqual(r.status_code, 200)
         self.assertRegex(r.content.decode(), r"These\s+slides\s+have\s+already\s+been\s+rejected")
@@ -6872,6 +6946,7 @@ class MaterialsTests(TestCase):
         r = self.client.get(url)
         self.assertEqual(r.status_code,200)
         empty_outbox()
+        self.assertTrue(exists_in_storage("staging", submission.filename))
         r = self.client.post(url,dict(title='different title',approve='approve'))
         self.assertEqual(r.status_code,302)
         self.assertEqual(SlideSubmission.objects.filter(status__slug = 'pending').count(), 0)
@@ -6881,6 +6956,8 @@ class MaterialsTests(TestCase):
         self.assertIsNotNone(submission.doc)
         self.assertEqual(session.presentations.count(),1)
         self.assertEqual(session.presentations.first().document.title,'different title')
+        self.assertTrue(exists_in_storage("slides", submission.doc.uploaded_filename))
+        self.assertFalse(exists_in_storage("staging", submission.filename))
         self.assertEqual(mock_slides_manager_cls.call_count, 1)
         self.assertEqual(mock_slides_manager_cls.call_args, call(api_config="fake settings"))
         self.assertEqual(mock_slides_manager_cls.return_value.add.call_count, 1)
@@ -6900,6 +6977,7 @@ class MaterialsTests(TestCase):
     @override_settings(MEETECHO_API_CONFIG="fake settings") # enough to trigger API calls
     @patch("ietf.meeting.views.SlidesManager")
     def test_approve_proposed_slides_multisession_apply_one(self, mock_slides_manager_cls):
+        TestBlobstoreManager().emptyTestBlobstores()
         submission = SlideSubmissionFactory(session__meeting__type_id='ietf')
         session1 = submission.session
         session2 = SessionFactory(group=submission.session.group, meeting=submission.session.meeting)
@@ -6928,6 +7006,7 @@ class MaterialsTests(TestCase):
     @override_settings(MEETECHO_API_CONFIG="fake settings") # enough to trigger API calls
     @patch("ietf.meeting.views.SlidesManager")
     def test_approve_proposed_slides_multisession_apply_all(self, mock_slides_manager_cls):
+        TestBlobstoreManager().emptyTestBlobstores()
         submission = SlideSubmissionFactory(session__meeting__type_id='ietf')
         session1 = submission.session
         session2 = SessionFactory(group=submission.session.group, meeting=submission.session.meeting)
@@ -6972,12 +7051,15 @@ class MaterialsTests(TestCase):
 
         submission = SlideSubmission.objects.get(session=session)
 
+        self.assertTrue(exists_in_storage("staging", submission.filename))
         approve_url = urlreverse('ietf.meeting.views.approve_proposed_slides', kwargs={'slidesubmission_id':submission.pk,'num':submission.session.meeting.number})
         login_testing_unauthorized(self, chair.user.username, approve_url)
         r = self.client.post(approve_url,dict(title=submission.title,approve='approve'))
         submission.refresh_from_db()
         self.assertEqual(r.status_code,302)
         self.client.logout()
+        self.assertFalse(exists_in_storage("staging", submission.filename))
+        self.assertTrue(exists_in_storage("slides", submission.doc.uploaded_filename))
         self.assertEqual(mock_slides_manager_cls.call_count, 1)
         self.assertEqual(mock_slides_manager_cls.call_args, call(api_config="fake settings"))
         self.assertEqual(mock_slides_manager_cls.return_value.add.call_count, 1)
@@ -7003,11 +7085,16 @@ class MaterialsTests(TestCase):
 
         (first_submission, second_submission) = SlideSubmission.objects.filter(session=session, status__slug = 'pending').order_by('id')
 
+        self.assertTrue(exists_in_storage("staging", first_submission.filename))
+        self.assertTrue(exists_in_storage("staging", second_submission.filename))
         approve_url = urlreverse('ietf.meeting.views.approve_proposed_slides', kwargs={'slidesubmission_id':second_submission.pk,'num':second_submission.session.meeting.number})
         login_testing_unauthorized(self, chair.user.username, approve_url)
         r = self.client.post(approve_url,dict(title=submission.title,approve='approve'))
         first_submission.refresh_from_db()
         second_submission.refresh_from_db()
+        self.assertTrue(exists_in_storage("staging", first_submission.filename))
+        self.assertFalse(exists_in_storage("staging", second_submission.filename))
+        self.assertTrue(exists_in_storage("slides", second_submission.doc.uploaded_filename))
         self.assertEqual(r.status_code,302)
         self.assertEqual(mock_slides_manager_cls.call_count, 1)
         self.assertEqual(mock_slides_manager_cls.call_args, call(api_config="fake settings"))
@@ -7024,6 +7111,7 @@ class MaterialsTests(TestCase):
         self.assertEqual(r.status_code,302)
         self.client.logout()
         self.assertFalse(mock_slides_manager_cls.called)
+        self.assertFalse(exists_in_storage("staging", first_submission.filename))
 
         self.assertEqual(SlideSubmission.objects.filter(status__slug = 'pending').count(),0)
         self.assertEqual(SlideSubmission.objects.filter(status__slug = 'rejected').count(),1)
@@ -7114,6 +7202,10 @@ class ImportNotesTests(TestCase):
         minutes_path = Path(self.meeting.get_materials_path()) / 'minutes'
         with (minutes_path / self.session.minutes().uploaded_filename).open() as f:
             self.assertEqual(f.read(), 'original markdown text')
+        self.assertEqual(
+            retrieve_str("minutes", self.session.minutes().uploaded_filename),
+            'original markdown text'
+        )
 
     def test_refuses_identical_import(self):
         """Should not be able to import text identical to the current revision"""
@@ -7173,7 +7265,9 @@ class ImportNotesTests(TestCase):
         # remove the file uploaded for the first rev
         minutes_docs = self.session.presentations.filter(document__type='minutes')
        self.assertEqual(minutes_docs.count(), 1)
-        Path(minutes_docs.first().document.get_file_name()).unlink()
+        to_remove = Path(minutes_docs.first().document.get_file_name())
+        to_remove.unlink()
+        remove_from_storage("minutes", to_remove.name)
 
         self.assertEqual(r.status_code, 302)
         with requests_mock.Mocker() as mock:
@@ -24,6 +24,7 @@ from django.utils.encoding import smart_str
 import debug # pyflakes:ignore
 
 from ietf.dbtemplate.models import DBTemplate
+from ietf.doc.storage_utils import store_bytes, store_str
 from ietf.meeting.models import (Session, SchedulingEvent, TimeSlot,
     Constraint, SchedTimeSessAssignment, SessionPresentation, Attended)
 from ietf.doc.models import Document, State, NewRevisionDocEvent, StateDocEvent
@@ -772,7 +773,12 @@ def handle_upload_file(file, filename, meeting, subdir, request=None, encoding=N
             # Whole file sanitization; add back what's missing from a complete
             # document (sanitize will remove these).
             clean = clean_html(text)
-            destination.write(clean.encode("utf8"))
+            clean_bytes = clean.encode('utf8')
+            destination.write(clean_bytes)
+            # Assumes contents of subdir are always document type ids
+            # TODO-BLOBSTORE: see if we can refactor this so that the connection to the document isn't lost
+            # In the meantime, consider faking it by parsing filename (shudder).
+            store_bytes(subdir, filename.name, clean_bytes)
             if request and clean != text:
                 messages.warning(request,
                     (
@@ -783,6 +789,11 @@ def handle_upload_file(file, filename, meeting, subdir, request=None, encoding=N
         else:
             for chunk in chunks:
                 destination.write(chunk)
+            file.seek(0)
+            if hasattr(file, "chunks"):
+                chunks = file.chunks()
+            # TODO-BLOBSTORE: See above question about refactoring
+            store_bytes(subdir, filename.name, b"".join(chunks))
 
     return None
 
@@ -809,13 +820,15 @@ def new_doc_for_session(type_id, session):
     session.presentations.create(document=doc,rev='00')
     return doc
 
+# TODO-BLOBSTORE - consider adding doc to this signature and factoring away type_id
 def write_doc_for_session(session, type_id, filename, contents):
     filename = Path(filename)
     path = Path(session.meeting.get_materials_path()) / type_id
     path.mkdir(parents=True, exist_ok=True)
     with open(path / filename, "wb") as file:
         file.write(contents.encode('utf-8'))
-    return
+    store_str(type_id, filename.name, contents)
+    return None
 
 def create_recording(session, url, title=None, user=None):
     '''
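The else-branch of handle_upload_file re-reads the upload for the blob copy: the first pass exhausts the chunk generator while writing to disk, so the file is rewound and, when the object supports it, chunks() is called again to get a fresh generator. A minimal standalone sketch of that re-read, with a hypothetical helper name:

    def blob_copy_of_upload(file):
        """Return the full payload of an uploaded file after its chunks were consumed."""
        file.seek(0)                    # the first pass left the pointer at EOF
        if hasattr(file, "chunks"):
            chunks = file.chunks()      # fresh generator; the first one is exhausted
        else:
            chunks = iter(lambda: file.read(8192), b"")
        return b"".join(chunks)
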
@@ -52,6 +52,7 @@ import debug # pyflakes:ignore
 
 from ietf.doc.fields import SearchableDocumentsField
 from ietf.doc.models import Document, State, DocEvent, NewRevisionDocEvent
+from ietf.doc.storage_utils import remove_from_storage, retrieve_bytes, store_file
 from ietf.group.models import Group
 from ietf.group.utils import can_manage_session_materials, can_manage_some_groups, can_manage_group
 from ietf.person.models import Person, User
@@ -3091,6 +3092,8 @@ def upload_session_slides(request, session_id, num, name=None):
             for chunk in file.chunks():
                 destination.write(chunk)
             destination.close()
+            file.seek(0)
+            store_file("staging", filename, file)
 
             submission.filename = filename
             submission.save()
@@ -4645,7 +4648,6 @@ def api_upload_bluesheet(request):
         save_err = save_bluesheet(request, session, file)
         if save_err:
             return err(400, save_err)
-
     return HttpResponse("Done", status=200, content_type='text/plain')
 
 
@@ -4957,6 +4959,8 @@ def approve_proposed_slides(request, slidesubmission_id, num):
         if not os.path.exists(path):
            os.makedirs(path)
         shutil.move(submission.staged_filepath(), os.path.join(path, target_filename))
+        doc.store_bytes(target_filename, retrieve_bytes("staging", submission.filename))
+        remove_from_storage("staging", submission.filename)
         post_process(doc)
         DocEvent.objects.create(type="approved_slides", doc=doc, rev=doc.rev, by=request.user.person, desc="Slides approved")
 
@@ -4994,11 +4998,14 @@ def approve_proposed_slides(request, slidesubmission_id, num):
         # in a SlideSubmission object without a file. Handle
         # this case and keep processing the 'disapprove' even if
         # the filename doesn't exist.
-        try:
-            if submission.filename != None and submission.filename != '':
+        if submission.filename != None and submission.filename != '':
+            try:
                 os.unlink(submission.staged_filepath())
-        except (FileNotFoundError, IsADirectoryError):
-            pass
+            except (FileNotFoundError, IsADirectoryError):
+                pass
+            remove_from_storage("staging", submission.filename)
 
         acronym = submission.session.group.acronym
         submission.status = SlideSubmissionStatusName.objects.get(slug='rejected')
         submission.save()
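Approving slides promotes the staged blob: the bytes are read back from the "staging" kind, written under the document's kind, and the staged copy is removed. There is no server-side move in this helper API, so it is a read-copy-delete. A sketch of that promotion as a standalone function, with hypothetical naming:

    from ietf.doc.storage_utils import remove_from_storage, retrieve_bytes, store_bytes

    def promote_staged_blob(staged_name: str, kind: str, target_name: str) -> None:
        """Copy a staged blob to its final kind, then drop the staged copy."""
        store_bytes(kind, target_name, retrieve_bytes("staging", staged_name))
        remove_from_storage("staging", staged_name)
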
@@ -42,6 +42,7 @@ class ReminderDates(models.Model):
 
 
 class NomCom(models.Model):
+    # TODO-BLOBSTORE: migrate this to a database field instead of a FileField and update code accordingly
     public_key = models.FileField(storage=NoLocationMigrationFileSystemStorage(location=settings.NOMCOM_PUBLIC_KEYS_DIR),
                                   upload_to=upload_path_handler, blank=True, null=True)
 
@@ -0,0 +1,38 @@
+# Copyright The IETF Trust 2025, All Rights Reserved
+
+from django.db import migrations, models
+import ietf.utils.storage
+
+
+class Migration(migrations.Migration):
+
+    dependencies = [
+        ("person", "0003_alter_personalapikey_endpoint"),
+    ]
+
+    operations = [
+        migrations.AlterField(
+            model_name="person",
+            name="photo",
+            field=models.ImageField(
+                blank=True,
+                default=None,
+                storage=ietf.utils.storage.BlobShadowFileSystemStorage(
+                    kind="", location=None
+                ),
+                upload_to="photo",
+            ),
+        ),
+        migrations.AlterField(
+            model_name="person",
+            name="photo_thumb",
+            field=models.ImageField(
+                blank=True,
+                default=None,
+                storage=ietf.utils.storage.BlobShadowFileSystemStorage(
+                    kind="", location=None
+                ),
+                upload_to="photo",
+            ),
+        ),
+    ]
@@ -29,7 +29,7 @@ import debug # pyflakes:ignore
 from ietf.name.models import ExtResourceName
 from ietf.person.name import name_parts, initials, plain_name
 from ietf.utils.mail import send_mail_preformatted
-from ietf.utils.storage import NoLocationMigrationFileSystemStorage
+from ietf.utils.storage import BlobShadowFileSystemStorage
 from ietf.utils.mail import formataddr
 from ietf.person.name import unidecode_name
 from ietf.utils import log
@@ -60,8 +60,18 @@ class Person(models.Model):
     pronouns_selectable = jsonfield.JSONCharField("Pronouns", max_length=120, blank=True, null=True, default=list )
     pronouns_freetext = models.CharField(" ", max_length=30, null=True, blank=True, help_text="Optionally provide your personal pronouns. These will be displayed on your public profile page and alongside your name in Meetecho and, in future, other systems. Select any number of the checkboxes OR provide a custom string up to 30 characters.")
     biography = models.TextField(blank=True, help_text="Short biography for use on leadership pages. Use plain text or reStructuredText markup.")
-    photo = models.ImageField(storage=NoLocationMigrationFileSystemStorage(), upload_to=settings.PHOTOS_DIRNAME, blank=True, default=None)
-    photo_thumb = models.ImageField(storage=NoLocationMigrationFileSystemStorage(), upload_to=settings.PHOTOS_DIRNAME, blank=True, default=None)
+    photo = models.ImageField(
+        storage=BlobShadowFileSystemStorage(kind="photo"),
+        upload_to=settings.PHOTOS_DIRNAME,
+        blank=True,
+        default=None,
+    )
+    photo_thumb = models.ImageField(
+        storage=BlobShadowFileSystemStorage(kind="photo"),
+        upload_to=settings.PHOTOS_DIRNAME,
+        blank=True,
+        default=None,
+    )
     name_from_draft = models.CharField("Full Name (from submission)", null=True, max_length=255, editable=False, help_text="Name as found in an Internet-Draft submission.")
 
     def __str__(self):
@@ -183,6 +183,12 @@ STATIC_IETF_ORG = "https://static.ietf.org"
 # Server-side static.ietf.org URL (used in pdfized)
 STATIC_IETF_ORG_INTERNAL = STATIC_IETF_ORG
 
+ENABLE_BLOBSTORAGE = True
+
+BLOBSTORAGE_MAX_ATTEMPTS = 1
+BLOBSTORAGE_CONNECT_TIMEOUT = 2
+BLOBSTORAGE_READ_TIMEOUT = 2
+
 WSGI_APPLICATION = "ietf.wsgi.application"
 
 AUTHENTICATION_BACKENDS = ( 'ietf.ietfauth.backends.CaseInsensitiveModelBackend', )
@@ -736,6 +742,38 @@ URL_REGEXPS = {
     "schedule_name": r"(?P<name>[A-Za-z0-9-:_]+)",
 }
 
+STORAGES: dict[str, Any] = {
+    "default": {"BACKEND": "django.core.files.storage.FileSystemStorage"},
+    "staticfiles": {"BACKEND": "django.contrib.staticfiles.storage.StaticFilesStorage"},
+}
+
+# settings_local will need to configure storages for these names
+MORE_STORAGE_NAMES: list[str] = [
+    "bofreq",
+    "charter",
+    "conflrev",
+    "active-draft",
+    "draft",
+    "slides",
+    "minutes",
+    "agenda",
+    "bluesheets",
+    "procmaterials",
+    "narrativeminutes",
+    "statement",
+    "statchg",
+    "liai-att",
+    "chatlog",
+    "polls",
+    "staging",
+    "bibxml-ids",
+    "indexes",
+    "floorplan",
+    "meetinghostlogo",
+    "photo",
+    "review",
+]
+
 # Override this in settings_local.py if needed
 # *_PATH variables ends with a slash/ .
 
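ENABLE_BLOBSTORAGE is the offswitch from #8541: when false, blob writes are skipped while the filesystem writes proceed unchanged. A minimal sketch of how a storage_utils-style writer can honor it, assuming Django 4.2's named-storages API; this is an illustration of the gating, not the code in ietf/doc/storage_utils.py:

    from django.conf import settings
    from django.core.files.base import ContentFile
    from django.core.files.storage import storages

    def store_bytes_sketch(kind: str, name: str, content: bytes, allow_overwrite: bool = False) -> None:
        """Hedged sketch of a blob writer gated by the ENABLE_BLOBSTORAGE offswitch."""
        if not getattr(settings, "ENABLE_BLOBSTORAGE", True):
            return  # offswitch engaged: skip blob writes entirely
        store = storages[kind]  # named storage configured via STORAGES / MORE_STORAGE_NAMES
        if not allow_overwrite and store.exists(name):
            raise FileExistsError(name)
        store.save(name, ContentFile(content))
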
@@ -14,7 +14,8 @@ import os
 import shutil
 import tempfile
 from ietf.settings import * # pyflakes:ignore
-from ietf.settings import TEST_CODE_COVERAGE_CHECKER
+from ietf.settings import STORAGES, TEST_CODE_COVERAGE_CHECKER, MORE_STORAGE_NAMES, BLOBSTORAGE_CONNECT_TIMEOUT, BLOBSTORAGE_READ_TIMEOUT, BLOBSTORAGE_MAX_ATTEMPTS
+import botocore.config
 import debug # pyflakes:ignore
 debug.debug = True
 
@@ -105,3 +106,30 @@ LOGGING["loggers"] = { # pyflakes:ignore
         'level': 'INFO',
     },
 }
+
+# Configure storages for the blob store - use env settings if present. See the --no-manage-blobstore test option.
+_blob_store_endpoint_url = os.environ.get("DATATRACKER_BLOB_STORE_ENDPOINT_URL", "http://blobstore:9000")
+_blob_store_access_key = os.environ.get("DATATRACKER_BLOB_STORE_ACCESS_KEY", "minio_root")
+_blob_store_secret_key = os.environ.get("DATATRACKER_BLOB_STORE_SECRET_KEY", "minio_pass")
+_blob_store_bucket_prefix = os.environ.get("DATATRACKER_BLOB_STORE_BUCKET_PREFIX", "test-")
+_blob_store_enable_profiling = (
+    os.environ.get("DATATRACKER_BLOB_STORE_ENABLE_PROFILING", "false").lower() == "true"
+)
+for storagename in MORE_STORAGE_NAMES:
+    STORAGES[storagename] = {
+        "BACKEND": "ietf.doc.storage_backends.CustomS3Storage",
+        "OPTIONS": dict(
+            endpoint_url=_blob_store_endpoint_url,
+            access_key=_blob_store_access_key,
+            secret_key=_blob_store_secret_key,
+            security_token=None,
+            client_config=botocore.config.Config(
+                signature_version="s3v4",
+                connect_timeout=BLOBSTORAGE_CONNECT_TIMEOUT,
+                read_timeout=BLOBSTORAGE_READ_TIMEOUT,
+                retries={"total_max_attempts": BLOBSTORAGE_MAX_ATTEMPTS},
+            ),
+            bucket_name=f"{_blob_store_bucket_prefix}{storagename}",
+            ietf_log_blob_timing=_blob_store_enable_profiling,
+        ),
+    }
@@ -31,6 +31,7 @@ from ietf.doc.factories import (DocumentFactory, WgDraftFactory, IndividualDraft
    ReviewFactory, WgRfcFactory)
from ietf.doc.models import ( Document, DocEvent, State,
    BallotPositionDocEvent, DocumentAuthor, SubmissionDocEvent )
from ietf.doc.storage_utils import exists_in_storage, retrieve_str, store_str
from ietf.doc.utils import create_ballot_if_not_open, can_edit_docextresources, update_action_holders
from ietf.group.factories import GroupFactory, RoleFactory
from ietf.group.models import Group
@@ -53,6 +54,7 @@ from ietf.submit.utils import (expirable_submissions, expire_submission, find_su
from ietf.utils import tool_version
from ietf.utils.accesstoken import generate_access_token
from ietf.utils.mail import outbox, get_payload_text
from ietf.utils.test_runner import TestBlobstoreManager
from ietf.utils.test_utils import login_testing_unauthorized, TestCase
from ietf.utils.timezone import date_today
from ietf.utils.draft import PlaintextDraft
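
The exists_in_storage / retrieve_str / store_str helpers used throughout these tests are thin conveniences over the named Django storages registered in STORAGES. Roughly, as a sketch of their semantics (not the actual ietf.doc.storage_utils implementation, whose real signatures carry extra options such as allow_overwrite):

    from django.core.files.base import ContentFile
    from django.core.files.storage import storages

    def store_str(kind: str, name: str, content: str) -> None:
        # write a text blob into the storage registered under "kind"
        storages[kind].save(name, ContentFile(content.encode("utf8")))

    def retrieve_str(kind: str, name: str) -> str:
        # read a blob back as text
        with storages[kind].open(name, "rb") as f:
            return f.read().decode("utf8")

    def exists_in_storage(kind: str, name: str) -> bool:
        return storages[kind].exists(name)
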
@@ -355,6 +357,7 @@ class SubmitTests(BaseSubmitTestCase):

    def submit_new_wg(self, formats):
        # submit new -> supply submitter info -> approve
        TestBlobstoreManager().emptyTestBlobstores()
        GroupFactory(type_id='wg', acronym='ames')
        mars = GroupFactory(type_id='wg', acronym='mars')
        RoleFactory(name_id='chair', group=mars, person__user__username='marschairman')
@@ -428,6 +431,13 @@ class SubmitTests(BaseSubmitTestCase):
        self.assertTrue(draft.latest_event(type="added_suggested_replaces"))
        self.assertTrue(not os.path.exists(os.path.join(self.staging_dir, "%s-%s.txt" % (name, rev))))
        self.assertTrue(os.path.exists(os.path.join(self.repository_dir, "%s-%s.txt" % (name, rev))))
        check_ext = ["xml", "txt", "html"] if "xml" in formats else ["txt"]
        for ext in check_ext:
            basename = f"{name}-{rev}.{ext}"
            extname = f"{ext}/{basename}"
            self.assertFalse(exists_in_storage("staging", basename))
            self.assertTrue(exists_in_storage("active-draft", extname))
            self.assertTrue(exists_in_storage("draft", extname))
        self.assertEqual(draft.type_id, "draft")
        self.assertEqual(draft.stream_id, "ietf")
        self.assertTrue(draft.expires >= timezone.now() + datetime.timedelta(days=settings.INTERNET_DRAFT_DAYS_TO_EXPIRE - 1))
@@ -535,6 +545,7 @@ class SubmitTests(BaseSubmitTestCase):

    def submit_new_concluded_wg_as_author(self, group_state_id='conclude'):
        """A new concluded WG submission by a logged-in author needs AD approval"""
        TestBlobstoreManager().emptyTestBlobstores()
        mars = GroupFactory(type_id='wg', acronym='mars', state_id=group_state_id)
        draft = WgDraftFactory(group=mars)
        setup_default_community_list_for_group(draft.group)
@@ -580,6 +591,7 @@ class SubmitTests(BaseSubmitTestCase):

    def submit_existing(self, formats, change_authors=True, group_type='wg', stream_type='ietf'):
        # submit new revision of existing -> supply submitter info -> prev authors confirm
        TestBlobstoreManager().emptyTestBlobstores()

        def _assert_authors_are_action_holders(draft, expect=True):
            for author in draft.authors():
@@ -771,6 +783,13 @@ class SubmitTests(BaseSubmitTestCase):
        self.assertTrue(os.path.exists(os.path.join(self.archive_dir, "%s-%s.txt" % (name, old_rev))))
        self.assertTrue(not os.path.exists(os.path.join(self.staging_dir, "%s-%s.txt" % (name, rev))))
        self.assertTrue(os.path.exists(os.path.join(self.repository_dir, "%s-%s.txt" % (name, rev))))
        check_ext = ["xml", "txt", "html"] if "xml" in formats else ["txt"]
        for ext in check_ext:
            basename = f"{name}-{rev}.{ext}"
            extname = f"{ext}/{basename}"
            self.assertFalse(exists_in_storage("staging", basename))
            self.assertTrue(exists_in_storage("active-draft", extname))
            self.assertTrue(exists_in_storage("draft", extname))
        self.assertEqual(draft.type_id, "draft")
        if stream_type == 'ietf':
            self.assertEqual(draft.stream_id, "ietf")
@@ -909,6 +928,7 @@ class SubmitTests(BaseSubmitTestCase):

    def submit_new_individual(self, formats):
        # submit new -> supply submitter info -> confirm
        TestBlobstoreManager().emptyTestBlobstores()

        name = "draft-authorname-testing-tests"
        rev = "00"
@@ -971,7 +991,13 @@ class SubmitTests(BaseSubmitTestCase):
        self.assertTrue(variant_path.samefile(variant_ftp_path))
        variant_all_archive_path = Path(settings.INTERNET_ALL_DRAFTS_ARCHIVE_DIR) / variant_path.name
        self.assertTrue(variant_path.samefile(variant_all_archive_path))

        check_ext = ["xml", "txt", "html"] if "xml" in formats else ["txt"]
        for ext in check_ext:
            basename = f"{name}-{rev}.{ext}"
            extname = f"{ext}/{basename}"
            self.assertFalse(exists_in_storage("staging", basename))
            self.assertTrue(exists_in_storage("active-draft", extname))
            self.assertTrue(exists_in_storage("draft", extname))


    def test_submit_new_individual_txt(self):
@@ -988,6 +1014,7 @@ class SubmitTests(BaseSubmitTestCase):
        self.submit_new_individual(["txt", "xml"])

    def submit_new_draft_no_org_or_address(self, formats):
        TestBlobstoreManager().emptyTestBlobstores()
        name = 'draft-testing-no-org-or-address'

        author = PersonFactory()
@@ -1078,6 +1105,7 @@ class SubmitTests(BaseSubmitTestCase):
        self.assertIsNone(event, 'External resource change event was unexpectedly created')

    def submit_new_draft_with_extresources(self, group):
        TestBlobstoreManager().emptyTestBlobstores()
        name = 'draft-testing-with-extresources'

        status_url, author = self.do_submission(name, rev='00', group=group)
@@ -1107,6 +1135,7 @@ class SubmitTests(BaseSubmitTestCase):

    def submit_new_individual_logged_in(self, formats):
        # submit new -> supply submitter info -> done
        TestBlobstoreManager().emptyTestBlobstores()

        name = "draft-authorname-testing-logged-in"
        rev = "00"
@@ -1250,6 +1279,7 @@ class SubmitTests(BaseSubmitTestCase):

        Unlike some other tests in this module, does not confirm draft if this would be required.
        """
        TestBlobstoreManager().emptyTestBlobstores()
        orig_draft: Document = DocumentFactory(  # type: ignore[annotation-unchecked]
            type_id='draft',
            group=GroupFactory(type_id=group_type) if group_type else None,
@@ -1290,6 +1320,7 @@ class SubmitTests(BaseSubmitTestCase):

    def submit_new_individual_replacing_wg(self, logged_in=False, group_state_id='active', notify_ad=False):
        """Chair of an active WG should be notified if individual draft is proposed to replace a WG draft"""
        TestBlobstoreManager().emptyTestBlobstores()
        name = "draft-authorname-testing-tests"
        rev = "00"
        group = None
@@ -1416,6 +1447,7 @@ class SubmitTests(BaseSubmitTestCase):
        # cancel
        r = self.client.post(status_url, dict(action=action))
        self.assertTrue(not os.path.exists(os.path.join(self.staging_dir, "%s-%s.txt" % (name, rev))))
        self.assertFalse(exists_in_storage("staging", f"{name}-{rev}.txt"))

    def test_edit_submission_and_force_post(self):
        # submit -> edit
@@ -1605,16 +1637,21 @@ class SubmitTests(BaseSubmitTestCase):
        self.assertEqual(Submission.objects.filter(name=name).count(), 1)

        self.assertTrue(os.path.exists(os.path.join(self.staging_dir, "%s-%s.txt" % (name, rev))))
        self.assertTrue(exists_in_storage("staging", f"{name}-{rev}.txt"))
        fd = io.open(os.path.join(self.staging_dir, "%s-%s.txt" % (name, rev)))
        txt_contents = fd.read()
        fd.close()
        self.assertTrue(name in txt_contents)
        self.assertTrue(os.path.exists(os.path.join(self.staging_dir, "%s-%s.xml" % (name, rev))))
        self.assertTrue(exists_in_storage("staging", f"{name}-{rev}.xml"))
        fd = io.open(os.path.join(self.staging_dir, "%s-%s.xml" % (name, rev)))
        xml_contents = fd.read()
        fd.close()
        self.assertTrue(name in xml_contents)
        self.assertTrue('<?xml version="1.0" encoding="UTF-8"?>' in xml_contents)
        xml_contents = retrieve_str("staging", f"{name}-{rev}.xml")
        self.assertTrue(name in xml_contents)
        self.assertTrue('<?xml version="1.0" encoding="UTF-8"?>' in xml_contents)

    def test_expire_submissions(self):
        s = Submission.objects.create(name="draft-ietf-mars-foo",
@@ -1901,6 +1938,7 @@ class SubmitTests(BaseSubmitTestCase):

        Assumes approval allowed by AD and secretary and, optionally, chair of WG
        """
        TestBlobstoreManager().emptyTestBlobstores()
        class _SubmissionFactory:
            """Helper class to generate fresh submissions"""
            def __init__(self, author, state):
@@ -2750,6 +2788,7 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
    """Tests of async submission-related tasks"""
    def test_process_and_accept_uploaded_submission(self):
        """process_and_accept_uploaded_submission should properly process a submission"""
        TestBlobstoreManager().emptyTestBlobstores()
        _today = date_today()
        xml, author = submission_file('draft-somebody-test-00', 'draft-somebody-test-00.xml', None, 'test_submission.xml')
        xml_data = xml.read()
@@ -2765,10 +2804,13 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        xml_path = Path(settings.IDSUBMIT_STAGING_PATH) / 'draft-somebody-test-00.xml'
        with xml_path.open('w') as f:
            f.write(xml_data)
        store_str("staging", "draft-somebody-test-00.xml", xml_data)
        txt_path = xml_path.with_suffix('.txt')
        self.assertFalse(txt_path.exists())
        html_path = xml_path.with_suffix('.html')
        self.assertFalse(html_path.exists())
        for ext in ["txt", "html"]:
            self.assertFalse(exists_in_storage("staging", f"draft-somebody-test-00.{ext}"))
        process_and_accept_uploaded_submission(submission)

        submission = Submission.objects.get(pk=submission.pk)  # refresh
@@ -2784,6 +2826,8 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        # at least test that these were created
        self.assertTrue(txt_path.exists())
        self.assertTrue(html_path.exists())
        for ext in ["txt", "html"]:
            self.assertTrue(exists_in_storage("staging", f"draft-somebody-test-00.{ext}"))
        self.assertEqual(submission.file_size, os.stat(txt_path).st_size)
        self.assertIn('Completed submission validation checks', submission.submissionevent_set.last().desc)

@@ -2798,6 +2842,7 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        txt.close()

        # submitter is not an author
        TestBlobstoreManager().emptyTestBlobstores()
        submitter = PersonFactory()
        submission = SubmissionFactory(
            name='draft-somebody-test',
@@ -2809,12 +2854,14 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        xml_path = Path(settings.IDSUBMIT_STAGING_PATH) / 'draft-somebody-test-00.xml'
        with xml_path.open('w') as f:
            f.write(xml_data)
        store_str("staging", "draft-somebody-test-00.xml", xml_data)
        process_and_accept_uploaded_submission(submission)
        submission = Submission.objects.get(pk=submission.pk)  # refresh
        self.assertEqual(submission.state_id, 'cancel')
        self.assertIn('not one of the document authors', submission.submissionevent_set.last().desc)

        # author has no email address in XML
        TestBlobstoreManager().emptyTestBlobstores()
        submission = SubmissionFactory(
            name='draft-somebody-test',
            rev='00',
@@ -2825,12 +2872,14 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        xml_path = Path(settings.IDSUBMIT_STAGING_PATH) / 'draft-somebody-test-00.xml'
        with xml_path.open('w') as f:
            f.write(re.sub(r'<email>.*</email>', '', xml_data))
        store_str("staging", "draft-somebody-test-00.xml", re.sub(r'<email>.*</email>', '', xml_data))
        process_and_accept_uploaded_submission(submission)
        submission = Submission.objects.get(pk=submission.pk)  # refresh
        self.assertEqual(submission.state_id, 'cancel')
        self.assertIn('Email address not found for all authors', submission.submissionevent_set.last().desc)

        # no title
        TestBlobstoreManager().emptyTestBlobstores()
        submission = SubmissionFactory(
            name='draft-somebody-test',
            rev='00',
@@ -2841,12 +2890,14 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        xml_path = Path(settings.IDSUBMIT_STAGING_PATH) / 'draft-somebody-test-00.xml'
        with xml_path.open('w') as f:
            f.write(re.sub(r'<title>.*</title>', '<title></title>', xml_data))
        store_str("staging", "draft-somebody-test-00.xml", re.sub(r'<title>.*</title>', '<title></title>', xml_data))
        process_and_accept_uploaded_submission(submission)
        submission = Submission.objects.get(pk=submission.pk)  # refresh
        self.assertEqual(submission.state_id, 'cancel')
        self.assertIn('Could not extract a valid title', submission.submissionevent_set.last().desc)

        # draft name mismatch
        TestBlobstoreManager().emptyTestBlobstores()
        submission = SubmissionFactory(
            name='draft-different-name',
            rev='00',
@@ -2857,12 +2908,14 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        xml_path = Path(settings.IDSUBMIT_STAGING_PATH) / 'draft-different-name-00.xml'
        with xml_path.open('w') as f:
            f.write(xml_data)
        store_str("staging", "draft-different-name-00.xml", xml_data)
        process_and_accept_uploaded_submission(submission)
        submission = Submission.objects.get(pk=submission.pk)  # refresh
        self.assertEqual(submission.state_id, 'cancel')
        self.assertIn('Submission rejected: XML Internet-Draft filename', submission.submissionevent_set.last().desc)

        # rev mismatch
        TestBlobstoreManager().emptyTestBlobstores()
        submission = SubmissionFactory(
            name='draft-somebody-test',
            rev='01',
@@ -2873,12 +2926,14 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        xml_path = Path(settings.IDSUBMIT_STAGING_PATH) / 'draft-somebody-test-01.xml'
        with xml_path.open('w') as f:
            f.write(xml_data)
        store_str("staging", "draft-somebody-test-01.xml", xml_data)
        process_and_accept_uploaded_submission(submission)
        submission = Submission.objects.get(pk=submission.pk)  # refresh
        self.assertEqual(submission.state_id, 'cancel')
        self.assertIn('Submission rejected: XML Internet-Draft revision', submission.submissionevent_set.last().desc)

        # not xml
        TestBlobstoreManager().emptyTestBlobstores()
        submission = SubmissionFactory(
            name='draft-somebody-test',
            rev='00',
@@ -2889,12 +2944,14 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        txt_path = Path(settings.IDSUBMIT_STAGING_PATH) / 'draft-somebody-test-00.txt'
        with txt_path.open('w') as f:
            f.write(txt_data)
        store_str("staging", "draft-somebody-test-00.txt", txt_data)
        process_and_accept_uploaded_submission(submission)
        submission = Submission.objects.get(pk=submission.pk)  # refresh
        self.assertEqual(submission.state_id, 'cancel')
        self.assertIn('Only XML Internet-Draft submissions', submission.submissionevent_set.last().desc)

        # wrong state
        TestBlobstoreManager().emptyTestBlobstores()
        submission = SubmissionFactory(
            name='draft-somebody-test',
            rev='00',
@@ -2903,8 +2960,9 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
            state_id='uploaded',
        )
        xml_path = Path(settings.IDSUBMIT_STAGING_PATH) / 'draft-somebody-test-00.xml'
        with xml_path.open('w') as f:  # Why is this state being written if the thing that uses it is mocked out?
            f.write(xml_data)
        store_str("staging", "draft-somebody-test-00.xml", xml_data)
        with mock.patch('ietf.submit.utils.process_submission_xml') as mock_proc_xml:
            process_and_accept_uploaded_submission(submission)
        submission = Submission.objects.get(pk=submission.pk)  # refresh
@@ -2912,6 +2970,7 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        self.assertEqual(submission.state_id, 'uploaded', 'State should not be changed')

        # failed checker
        TestBlobstoreManager().emptyTestBlobstores()
        submission = SubmissionFactory(
            name='draft-somebody-test',
            rev='00',
@@ -2922,6 +2981,7 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        xml_path = Path(settings.IDSUBMIT_STAGING_PATH) / 'draft-somebody-test-00.xml'
        with xml_path.open('w') as f:
            f.write(xml_data)
        store_str("staging", "draft-somebody-test-00.xml", xml_data)
        with mock.patch(
            'ietf.submit.utils.apply_checkers',
            side_effect=lambda _, __: submission.checks.create(
@@ -2958,6 +3018,7 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        self.assertEqual(mock_method.call_count, 0)

    def test_process_submission_xml(self):
        TestBlobstoreManager().emptyTestBlobstores()
        xml_path = Path(settings.IDSUBMIT_STAGING_PATH) / "draft-somebody-test-00.xml"
        xml, _ = submission_file(
            "draft-somebody-test-00",
@@ -2968,6 +3029,7 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        )
        xml_contents = xml.read()
        xml_path.write_text(xml_contents)
        store_str("staging", "draft-somebody-test-00.xml", xml_contents)
        output = process_submission_xml("draft-somebody-test", "00")
        self.assertEqual(output["filename"], "draft-somebody-test")
        self.assertEqual(output["rev"], "00")
@@ -2983,23 +3045,32 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        self.assertEqual(output["xml_version"], "3")

        # Should behave on missing or partial <date> elements
        TestBlobstoreManager().emptyTestBlobstores()
        xml_path.write_text(re.sub(r"<date.+>", "", xml_contents))  # strip <date...> entirely
        store_str("staging", "draft-somebody-test-00.xml", re.sub(r"<date.+>", "", xml_contents))
        output = process_submission_xml("draft-somebody-test", "00")
        self.assertEqual(output["document_date"], None)

        TestBlobstoreManager().emptyTestBlobstores()
        xml_path.write_text(re.sub(r"<date year=.+ month", "<date month", xml_contents))  # remove year
        store_str("staging", "draft-somebody-test-00.xml", re.sub(r"<date year=.+ month", "<date month", xml_contents))
        output = process_submission_xml("draft-somebody-test", "00")
        self.assertEqual(output["document_date"], date_today())

        TestBlobstoreManager().emptyTestBlobstores()
        xml_path.write_text(re.sub(r"(<date.+) month=.+day=(.+>)", r"\1 day=\2", xml_contents))  # remove month
        store_str("staging", "draft-somebody-test-00.xml", re.sub(r"(<date.+) month=.+day=(.+>)", r"\1 day=\2", xml_contents))
        output = process_submission_xml("draft-somebody-test", "00")
        self.assertEqual(output["document_date"], date_today())

        TestBlobstoreManager().emptyTestBlobstores()
        xml_path.write_text(re.sub(r"<date(.+) day=.+>", r"<date\1>", xml_contents))  # remove day
        store_str("staging", "draft-somebody-test-00.xml", re.sub(r"<date(.+) day=.+>", r"<date\1>", xml_contents))
        output = process_submission_xml("draft-somebody-test", "00")
        self.assertEqual(output["document_date"], date_today())

        # name mismatch
        TestBlobstoreManager().emptyTestBlobstores()
        xml, _ = submission_file(
            "draft-somebody-wrong-name-00",  # name that appears in the file
            "draft-somebody-test-00.xml",
@@ -3008,10 +3079,13 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
            title="Correct Draft Title",
        )
        xml_path.write_text(xml.read())
        xml.seek(0)
        store_str("staging", "draft-somebody-test-00.xml", xml.read())
        with self.assertRaisesMessage(SubmissionError, "disagrees with submission filename"):
            process_submission_xml("draft-somebody-test", "00")

        # rev mismatch
        TestBlobstoreManager().emptyTestBlobstores()
        xml, _ = submission_file(
            "draft-somebody-test-01",  # name that appears in the file
            "draft-somebody-test-00.xml",
@@ -3020,10 +3094,13 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
            title="Correct Draft Title",
        )
        xml_path.write_text(xml.read())
        xml.seek(0)
        store_str("staging", "draft-somebody-test-00.xml", xml.read())
        with self.assertRaisesMessage(SubmissionError, "disagrees with submission revision"):
            process_submission_xml("draft-somebody-test", "00")

        # missing title
        TestBlobstoreManager().emptyTestBlobstores()
        xml, _ = submission_file(
            "draft-somebody-test-00",  # name that appears in the file
            "draft-somebody-test-00.xml",
@@ -3032,10 +3109,13 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
            title="",
        )
        xml_path.write_text(xml.read())
        xml.seek(0)
        store_str("staging", "draft-somebody-test-00.xml", xml.read())
        with self.assertRaisesMessage(SubmissionError, "Could not extract a valid title"):
            process_submission_xml("draft-somebody-test", "00")

    def test_process_submission_text(self):
        TestBlobstoreManager().emptyTestBlobstores()
        txt_path = Path(settings.IDSUBMIT_STAGING_PATH) / "draft-somebody-test-00.txt"
        txt, _ = submission_file(
            "draft-somebody-test-00",
@@ -3045,6 +3125,8 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
            title="Correct Draft Title",
        )
        txt_path.write_text(txt.read())
        txt.seek(0)
        store_str("staging", "draft-somebody-test-00.txt", txt.read())
        output = process_submission_text("draft-somebody-test", "00")
        self.assertEqual(output["filename"], "draft-somebody-test")
        self.assertEqual(output["rev"], "00")
@@ -3060,6 +3142,7 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        self.assertIsNone(output["xml_version"])

        # name mismatch
        TestBlobstoreManager().emptyTestBlobstores()
        txt, _ = submission_file(
            "draft-somebody-wrong-name-00",  # name that appears in the file
            "draft-somebody-test-00.txt",
@@ -3069,11 +3152,14 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        )
        with txt_path.open('w') as fd:
            fd.write(txt.read())
        txt.seek(0)
        store_str("staging", "draft-somebody-test-00.txt", txt.read())
        txt.close()
        with self.assertRaisesMessage(SubmissionError, 'disagrees with submission filename'):
            process_submission_text("draft-somebody-test", "00")

        # rev mismatch
        TestBlobstoreManager().emptyTestBlobstores()
        txt, _ = submission_file(
            "draft-somebody-test-01",  # name that appears in the file
            "draft-somebody-test-00.txt",
@@ -3083,6 +3169,8 @@ class AsyncSubmissionTests(BaseSubmitTestCase):
        )
        with txt_path.open('w') as fd:
            fd.write(txt.read())
        txt.seek(0)
        store_str("staging", "draft-somebody-test-00.txt", txt.read())
        txt.close()
        with self.assertRaisesMessage(SubmissionError, 'disagrees with submission revision'):
            process_submission_text("draft-somebody-test", "00")
@@ -3221,6 +3309,7 @@ class PostSubmissionTests(BaseSubmitTestCase):
        path = Path(self.staging_dir)
        for ext in ['txt', 'xml', 'pdf', 'md']:
            (path / f'{draft.name}-{draft.rev}.{ext}').touch()
            store_str("staging", f"{draft.name}-{draft.rev}.{ext}", "")
        files = find_submission_filenames(draft)
        self.assertCountEqual(
            files,
@@ -3280,6 +3369,7 @@ class ValidateSubmissionFilenameTests(BaseSubmitTestCase):
        new_wg_doc = WgDraftFactory(rev='01', relations=[('replaces',old_wg_doc)])
        path = Path(self.archive_dir) / f'{new_wg_doc.name}-{new_wg_doc.rev}.txt'
        path.touch()
        store_str("staging", f"{new_wg_doc.name}-{new_wg_doc.rev}.txt", "")

        bad_revs = (None, '', '2', 'aa', '00', '01', '100', '002', u'öö')
        for rev in bad_revs:

@@ -36,6 +36,7 @@ from ietf.doc.models import ( Document, State, DocEvent, SubmissionDocEvent,
    DocumentAuthor, AddedMessageEvent )
from ietf.doc.models import NewRevisionDocEvent
from ietf.doc.models import RelatedDocument, DocRelationshipName, DocExtResource
from ietf.doc.storage_utils import remove_from_storage, retrieve_bytes, store_bytes, store_file, store_str
from ietf.doc.utils import (add_state_change_event, rebuild_reference_relations,
    set_replaces_for_document, prettify_std_name, update_doc_extresources,
    can_edit_docextresources, update_documentauthors, update_action_holders,
@@ -455,6 +456,7 @@ def post_submission(request, submission, approved_doc_desc, approved_subm_desc):
        from ietf.doc.expire import move_draft_files_to_archive
        move_draft_files_to_archive(draft, prev_rev)

    submission.draft = draft
    move_files_to_repository(submission)
    submission.state = DraftSubmissionStateName.objects.get(slug="posted")
    log.log(f"{submission.name}: moved files")
@@ -488,7 +490,6 @@ def post_submission(request, submission, approved_doc_desc, approved_subm_desc):
    if new_possibly_replaces:
        send_review_possibly_replaces_request(request, draft, submitter_info)

    submission.save()

    create_submission_event(request, submission, approved_subm_desc)
@@ -498,6 +499,7 @@ def post_submission(request, submission, approved_doc_desc, approved_subm_desc):
    ref_rev_file_name = os.path.join(os.path.join(settings.BIBXML_BASE_PATH, 'bibxml-ids'), 'reference.I-D.%s-%s.xml' % (draft.name, draft.rev ))
    with io.open(ref_rev_file_name, "w", encoding='utf-8') as f:
        f.write(ref_text)
    store_str("bibxml-ids", f"reference.I-D.{draft.name}-{draft.rev}.xml", ref_text)  # TODO-BLOBSTORE verify with test

    log.log(f"{submission.name}: done")

@@ -666,6 +668,12 @@ def move_files_to_repository(submission):
            ftp_dest = Path(settings.FTP_DIR) / "internet-drafts" / dest.name
            os.link(dest, all_archive_dest)
            os.link(dest, ftp_dest)
            # Shadow what's happening to the fs in the blobstores. When the stores become
            # authoritative, the source and dest checks will need to apply to the stores instead.
            content_bytes = retrieve_bytes("staging", fname)
            store_bytes("active-draft", f"{ext}/{fname}", content_bytes)
            submission.draft.store_bytes(f"{ext}/{fname}", content_bytes)
            remove_from_storage("staging", fname)
        elif dest.exists():
log.log("Intended to move '%s' to '%s', but found source missing while destination exists.")
        elif f".{ext}" in submission.file_types.split(','):
@@ -678,6 +686,7 @@ def remove_staging_files(name, rev):
    exts = [f'.{ext}' for ext in settings.IDSUBMIT_FILE_TYPES]
    for ext in exts:
        basename.with_suffix(ext).unlink(missing_ok=True)
        remove_from_storage("staging", basename.with_suffix(ext).name, warn_if_missing=False)


def remove_submission_files(submission):
@@ -766,6 +775,8 @@ def save_files(form):
        for chunk in f.chunks():
            destination.write(chunk)
        log.log("saved file %s" % name)
        f.seek(0)
        store_file("staging", f"{form.filename}-{form.revision}.{ext}", f)
    return file_name

@@ -988,6 +999,10 @@ def render_missing_formats(submission):
                xml_version,
            )
        )
        # When the blobstores become authoritative, the guard at the
        # containing if statement needs to be based on the store
        with Path(txt_path).open("rb") as f:
            store_file("staging", f"{submission.name}-{submission.rev}.txt", f)

        # --- Convert to html ---
        html_path = staging_path(submission.name, submission.rev, '.html')
@@ -1010,6 +1025,8 @@ def render_missing_formats(submission):
                xml_version,
            )
        )
        with Path(html_path).open("rb") as f:
            store_file("staging", f"{submission.name}-{submission.rev}.html", f)


def accept_submission(submission: Submission, request: Optional[HttpRequest] = None, autopost=False):
@@ -1361,6 +1378,7 @@ def process_and_validate_submission(submission):
    except SubmissionError:
        raise  # pass SubmissionErrors up the stack
    except Exception as err:
        # (this is a good point to just `raise err` when diagnosing Submission test failures)
        # convert other exceptions into SubmissionErrors
        log.log(f'Unexpected exception while processing submission {submission.pk}.')
        log.log(traceback.format_exc())

@@ -1,8 +1,56 @@
# Copyright The IETF Trust 2020-2025, All Rights Reserved
"""Django Storage classes"""
from pathlib import Path

from django.conf import settings
from django.core.files.storage import FileSystemStorage
from ietf.doc.storage_utils import store_file
from .log import log


class NoLocationMigrationFileSystemStorage(FileSystemStorage):

    def deconstruct(self):
        path, args, kwargs = super().deconstruct()
        kwargs["location"] = None  # don't record location in migrations
        return path, args, kwargs


class BlobShadowFileSystemStorage(NoLocationMigrationFileSystemStorage):
    """FileSystemStorage that shadows writes to the blob store as well

    Strips directories from the filename when naming the blob.
    """

    def __init__(
        self,
        *,  # disallow positional arguments
        kind: str,
        location=None,
        base_url=None,
        file_permissions_mode=None,
        directory_permissions_mode=None,
    ):
        self.kind = kind
        super().__init__(
            location, base_url, file_permissions_mode, directory_permissions_mode
        )

    def save(self, name, content, max_length=None):
        # Write content to the filesystem - this deals with chunks, etc...
        saved_name = super().save(name, content, max_length)

        if settings.ENABLE_BLOBSTORAGE:
            # Retrieve the content and write to the blob store
            blob_name = Path(saved_name).name  # strips path
            try:
                with self.open(saved_name, "rb") as f:
                    store_file(self.kind, blob_name, f, allow_overwrite=True)
            except Exception as err:
                log(f"Failed to shadow {saved_name} at {self.kind}:{blob_name}: {err}")
        return saved_name  # includes the path!

    def deconstruct(self):
        path, args, kwargs = super().deconstruct()
        kwargs["kind"] = ""  # don't record "kind" in migrations
        return path, args, kwargs
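
Elsewhere in this change, floorplans, host logos, and photos opt in to shadowing by constructing this storage with a blob kind. A sketch of such a field (the model and upload path here are illustrative, not copied from this diff):

    from django.db import models
    from ietf.utils.storage import BlobShadowFileSystemStorage

    class Person(models.Model):
        photo = models.ImageField(
            storage=BlobShadowFileSystemStorage(kind="photo"),
            upload_to="photo",
            blank=True,
        )

Blanking both location and kind in deconstruct() keeps migrations stable when the storage configuration differs between environments.
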
@@ -48,6 +48,8 @@ import pathlib
import subprocess
import tempfile
import copy
import boto3
import botocore.config
import factory.random
import urllib3
import warnings
@@ -85,6 +87,8 @@ from ietf.utils.management.commands import pyflakes
from ietf.utils.test_smtpserver import SMTPTestServerDriver
from ietf.utils.test_utils import TestCase

from mypy_boto3_s3.service_resource import Bucket


loaded_templates = set()
visited_urls = set()
@@ -722,9 +726,25 @@ class IetfTestRunner(DiscoverRunner):
        parser.add_argument('--rerun-until-failure',
                            action='store_true', dest='rerun', default=False,
                            help='Run the indicated tests in a loop until a failure occurs. ' )
        parser.add_argument('--no-manage-blobstore', action='store_false', dest='manage_blobstore',
                            help='Disable creating/deleting test buckets in the blob store. '
                                 'When this argument is used, a set of buckets with "test-" prefixed to their '
                                 'names must already exist.')

    def __init__(
        self,
        ignore_lower_coverage=False,
        skip_coverage=False,
        save_version_coverage=None,
        html_report=None,
        permit_mixed_migrations=None,
        show_logging=None,
        validate_html=None,
        validate_html_harder=None,
        rerun=None,
        manage_blobstore=True,
        **kwargs
    ):
        self.ignore_lower_coverage = ignore_lower_coverage
        self.check_coverage = not skip_coverage
        self.save_version_coverage = save_version_coverage
@@ -752,6 +772,8 @@ class IetfTestRunner(DiscoverRunner):
        # contains parent classes to later subclasses, the parent classes will determine the ordering, so use the most
        # specific classes necessary to get the right ordering:
        self.reorder_by = (PyFlakesTestCase, MyPyTest,) + self.reorder_by + (StaticLiveServerTestCase, TemplateTagTest, CoverageTest,)
        #self.buckets = set()
        self.blobstoremanager = TestBlobstoreManager() if manage_blobstore else None

    def setup_test_environment(self, **kwargs):
        global template_coverage_collection
@@ -936,6 +958,9 @@
                print("    (extra pedantically)")
            self.vnu = start_vnu_server()

        if self.blobstoremanager is not None:
            self.blobstoremanager.createTestBlobstores()

        super(IetfTestRunner, self).setup_test_environment(**kwargs)

    def teardown_test_environment(self, **kwargs):
@@ -966,6 +991,9 @@
        if self.vnu:
            self.vnu.terminate()

        if self.blobstoremanager is not None:
            self.blobstoremanager.destroyTestBlobstores()

        super(IetfTestRunner, self).teardown_test_environment(**kwargs)

    def validate(self, testcase):
@@ -1220,3 +1248,39 @@ class IetfLiveServerTestCase(StaticLiveServerTestCase):
        for k, v in self.replaced_settings.items():
            setattr(settings, k, v)
        super().tearDown()


class TestBlobstoreManager():
    # N.B. buckets and blobstore are intentional Class-level attributes
    buckets: set[Bucket] = set()

    blobstore = boto3.resource(
        "s3",
        endpoint_url="http://blobstore:9000",
        aws_access_key_id="minio_root",
        aws_secret_access_key="minio_pass",
        aws_session_token=None,
        config=botocore.config.Config(signature_version="s3v4"),
        # config=botocore.config.Config(signature_version=botocore.UNSIGNED),
        verify=False,
    )

    def createTestBlobstores(self):
        for storagename in settings.MORE_STORAGE_NAMES:
            bucketname = f"test-{storagename}"
            try:
                bucket = self.blobstore.create_bucket(Bucket=bucketname)
                self.buckets.add(bucket)
            except self.blobstore.meta.client.exceptions.BucketAlreadyOwnedByYou:
                bucket = self.blobstore.Bucket(bucketname)
                self.buckets.add(bucket)

    def destroyTestBlobstores(self):
        self.emptyTestBlobstores(destroy=True)

    def emptyTestBlobstores(self, destroy=False):
        # debug.show('f"Asked to empty test blobstores with destroy={destroy}"')
        for bucket in self.buckets:
            bucket.objects.delete()
            if destroy:
                bucket.delete()
        if destroy:
            self.buckets = set()
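
When the runner is invoked with --no-manage-blobstore, the "test-" prefixed buckets must already exist. Creating them out of band might look like this sketch, reusing the connection values above (these are the dev defaults, not production credentials):

    import boto3
    from django.conf import settings

    s3 = boto3.resource(
        "s3",
        endpoint_url="http://blobstore:9000",
        aws_access_key_id="minio_root",
        aws_secret_access_key="minio_pass",
    )
    for name in settings.MORE_STORAGE_NAMES:
        # re-running raises BucketAlreadyOwnedByYou, which can be
        # caught and ignored as createTestBlobstores does above
        s3.create_bucket(Bucket=f"test-{name}")
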
@@ -6,7 +6,9 @@ from email.utils import parseaddr
import json

from ietf import __release_hash__
from ietf.settings import *  # pyflakes:ignore
from ietf.settings import STORAGES, MORE_STORAGE_NAMES, BLOBSTORAGE_CONNECT_TIMEOUT, BLOBSTORAGE_READ_TIMEOUT, BLOBSTORAGE_MAX_ATTEMPTS
import botocore.config


def _multiline_to_list(s):
@@ -29,7 +31,7 @@ _SECRET_KEY = os.environ.get("DATATRACKER_DJANGO_SECRET_KEY", None)
if _SECRET_KEY is not None:
    SECRET_KEY = _SECRET_KEY
else:
    raise RuntimeError("DATATRACKER_DJANGO_SECRET_KEY must be set")

_NOMCOM_APP_SECRET_B64 = os.environ.get("DATATRACKER_NOMCOM_APP_SECRET_B64", None)
if _NOMCOM_APP_SECRET_B64 is not None:
@@ -41,7 +43,7 @@ _IANA_SYNC_PASSWORD = os.environ.get("DATATRACKER_IANA_SYNC_PASSWORD", None)
if _IANA_SYNC_PASSWORD is not None:
    IANA_SYNC_PASSWORD = _IANA_SYNC_PASSWORD
else:
    raise RuntimeError("DATATRACKER_IANA_SYNC_PASSWORD must be set")

_RFC_EDITOR_SYNC_PASSWORD = os.environ.get("DATATRACKER_RFC_EDITOR_SYNC_PASSWORD", None)
if _RFC_EDITOR_SYNC_PASSWORD is not None:
@@ -59,25 +61,25 @@ _GITHUB_BACKUP_API_KEY = os.environ.get("DATATRACKER_GITHUB_BACKUP_API_KEY", Non
if _GITHUB_BACKUP_API_KEY is not None:
    GITHUB_BACKUP_API_KEY = _GITHUB_BACKUP_API_KEY
else:
    raise RuntimeError("DATATRACKER_GITHUB_BACKUP_API_KEY must be set")

_API_KEY_TYPE = os.environ.get("DATATRACKER_API_KEY_TYPE", None)
if _API_KEY_TYPE is not None:
    API_KEY_TYPE = _API_KEY_TYPE
else:
    raise RuntimeError("DATATRACKER_API_KEY_TYPE must be set")

_API_PUBLIC_KEY_PEM_B64 = os.environ.get("DATATRACKER_API_PUBLIC_KEY_PEM_B64", None)
if _API_PUBLIC_KEY_PEM_B64 is not None:
    API_PUBLIC_KEY_PEM = b64decode(_API_PUBLIC_KEY_PEM_B64)
else:
    raise RuntimeError("DATATRACKER_API_PUBLIC_KEY_PEM_B64 must be set")

_API_PRIVATE_KEY_PEM_B64 = os.environ.get("DATATRACKER_API_PRIVATE_KEY_PEM_B64", None)
if _API_PRIVATE_KEY_PEM_B64 is not None:
    API_PRIVATE_KEY_PEM = b64decode(_API_PRIVATE_KEY_PEM_B64)
else:
    raise RuntimeError("DATATRACKER_API_PRIVATE_KEY_PEM_B64 must be set")

# Set DEBUG if DATATRACKER_DEBUG env var is the word "true"
DEBUG = os.environ.get("DATATRACKER_DEBUG", "false").lower() == "true"
@@ -102,7 +104,9 @@ DATABASES = {
# Configure persistent connections. A setting of 0 is Django's default.
_conn_max_age = os.environ.get("DATATRACKER_DB_CONN_MAX_AGE", "0")
# A string "none" means unlimited age.
DATABASES["default"]["CONN_MAX_AGE"] = (
    None if _conn_max_age.lower() == "none" else int(_conn_max_age)
)
# Enable connection health checks if DATATRACKER_DB_CONN_HEALTH_CHECK is the string "true"
_conn_health_checks = bool(
    os.environ.get("DATATRACKER_DB_CONN_HEALTH_CHECKS", "false").lower() == "true"
@@ -114,9 +118,11 @@ _admins_str = os.environ.get("DATATRACKER_ADMINS", None)
if _admins_str is not None:
    ADMINS = [parseaddr(admin) for admin in _multiline_to_list(_admins_str)]
else:
    raise RuntimeError("DATATRACKER_ADMINS must be set")

USING_DEBUG_EMAIL_SERVER = (
    os.environ.get("DATATRACKER_EMAIL_DEBUG", "false").lower() == "true"
)
EMAIL_HOST = os.environ.get("DATATRACKER_EMAIL_HOST", "localhost")
EMAIL_PORT = int(os.environ.get("DATATRACKER_EMAIL_PORT", "2025"))

@@ -126,7 +132,7 @@ if _celery_password is None:
CELERY_BROKER_URL = "amqp://datatracker:{password}@{host}/{queue}".format(
    host=os.environ.get("RABBITMQ_HOSTNAME", "dt-rabbitmq"),
    password=_celery_password,
    queue=os.environ.get("RABBITMQ_QUEUE", "dt"),
)

IANA_SYNC_USERNAME = "ietfsync"
@@ -140,10 +146,10 @@ if _registration_api_key is None:
    raise RuntimeError("DATATRACKER_REGISTRATION_API_KEY must be set")
STATS_REGISTRATION_ATTENDEES_JSON_URL = f"https://registration.ietf.org/{{number}}/attendees/?apikey={_registration_api_key}"

# FIRST_CUTOFF_DAYS = 12
# SECOND_CUTOFF_DAYS = 12
# SUBMISSION_CUTOFF_DAYS = 26
# SUBMISSION_CORRECTION_DAYS = 57
MEETING_MATERIALS_SUBMISSION_CUTOFF_DAYS = 26
MEETING_MATERIALS_SUBMISSION_CORRECTION_DAYS = 54

@@ -155,7 +161,7 @@ _MEETECHO_CLIENT_SECRET = os.environ.get("DATATRACKER_MEETECHO_CLIENT_SECRET", N
if _MEETECHO_CLIENT_ID is not None and _MEETECHO_CLIENT_SECRET is not None:
    MEETECHO_API_CONFIG = {
        "api_base": os.environ.get(
            "DATATRACKER_MEETECHO_API_BASE",
            "https://meetings.conf.meetecho.com/api/v1/",
        ),
        "client_id": _MEETECHO_CLIENT_ID,
@@ -173,7 +179,9 @@ if "DATATRACKER_APP_API_TOKENS_JSON_B64" in os.environ:
        raise RuntimeError(
            "Only one of DATATRACKER_APP_API_TOKENS_JSON and DATATRACKER_APP_API_TOKENS_JSON_B64 may be set"
        )
    _APP_API_TOKENS_JSON = b64decode(
        os.environ.get("DATATRACKER_APP_API_TOKENS_JSON_B64")
    )
else:
    _APP_API_TOKENS_JSON = os.environ.get("DATATRACKER_APP_API_TOKENS_JSON", None)

@@ -189,7 +197,9 @@ IDSUBMIT_MAX_DAILY_SAME_SUBMITTER = 5000

# Leave DATATRACKER_MATOMO_SITE_ID unset to disable Matomo reporting
if "DATATRACKER_MATOMO_SITE_ID" in os.environ:
    MATOMO_DOMAIN_PATH = os.environ.get(
        "DATATRACKER_MATOMO_DOMAIN_PATH", "analytics.ietf.org"
    )
    MATOMO_SITE_ID = os.environ.get("DATATRACKER_MATOMO_SITE_ID")
    MATOMO_DISABLE_COOKIES = True

@@ -197,9 +207,13 @@ _SCOUT_KEY = os.environ.get("DATATRACKER_SCOUT_KEY", None)
_SCOUT_KEY = os.environ.get("DATATRACKER_SCOUT_KEY", None)
if _SCOUT_KEY is not None:
    if SERVER_MODE == "production":
        PROD_PRE_APPS = [
            "scout_apm.django",
        ]
    else:
        DEV_PRE_APPS = [
            "scout_apm.django",
        ]
    SCOUT_MONITOR = True
    SCOUT_KEY = _SCOUT_KEY
    SCOUT_NAME = os.environ.get("DATATRACKER_SCOUT_NAME", "Datatracker")
@@ -216,16 +230,17 @@ if _SCOUT_KEY is not None:
STATIC_URL = os.environ.get("DATATRACKER_STATIC_URL", None)
if STATIC_URL is None:
    from ietf import __version__

    STATIC_URL = f"https://static.ietf.org/dt/{__version__}/"

# Set these to the same as "production" in settings.py, whether production mode or not
MEDIA_ROOT = "/a/www/www6s/lib/dt/media/"
MEDIA_URL = "https://www.ietf.org/lib/dt/media/"
PHOTOS_DIRNAME = "photo"
PHOTOS_DIR = MEDIA_ROOT + PHOTOS_DIRNAME

# Normally only set for debug, but needed until we have a real FS
DJANGO_VITE_MANIFEST_PATH = os.path.join(BASE_DIR, "static/dist-neue/manifest.json")

# Binaries that are different in the docker image
DE_GFM_BINARY = "/usr/local/bin/de-gfm"
@@ -235,6 +250,7 @@ IDSUBMIT_IDNITS_BINARY = "/usr/local/bin/idnits"
MEMCACHED_HOST = os.environ.get("DT_MEMCACHED_SERVICE_HOST", "127.0.0.1")
MEMCACHED_PORT = os.environ.get("DT_MEMCACHED_SERVICE_PORT", "11211")
from ietf import __version__

CACHES = {
    "default": {
        "BACKEND": "ietf.utils.cache.LenientMemcacheCache",
@@ -285,3 +301,46 @@ if _csrf_trusted_origins_str is not None:

# Console logs as JSON instead of plain when running in k8s
LOGGING["handlers"]["console"]["formatter"] = "json"

# Configure storages for the blob store
_blob_store_endpoint_url = os.environ.get("DATATRACKER_BLOB_STORE_ENDPOINT_URL")
_blob_store_access_key = os.environ.get("DATATRACKER_BLOB_STORE_ACCESS_KEY")
_blob_store_secret_key = os.environ.get("DATATRACKER_BLOB_STORE_SECRET_KEY")
if None in (_blob_store_endpoint_url, _blob_store_access_key, _blob_store_secret_key):
    raise RuntimeError(
        "All of DATATRACKER_BLOB_STORE_ENDPOINT_URL, DATATRACKER_BLOB_STORE_ACCESS_KEY, "
        "and DATATRACKER_BLOB_STORE_SECRET_KEY must be set"
    )
_blob_store_bucket_prefix = os.environ.get(
    "DATATRACKER_BLOB_STORE_BUCKET_PREFIX", ""
)
_blob_store_enable_profiling = (
    os.environ.get("DATATRACKER_BLOB_STORE_ENABLE_PROFILING", "false").lower() == "true"
)
# Values read from the environment arrive as strings; cast them so botocore
# receives the numbers it expects.
_blob_store_max_attempts = int(
    os.environ.get("DATATRACKER_BLOB_STORE_MAX_ATTEMPTS", BLOBSTORAGE_MAX_ATTEMPTS)
)
_blob_store_connect_timeout = int(
    os.environ.get("DATATRACKER_BLOB_STORE_CONNECT_TIMEOUT", BLOBSTORAGE_CONNECT_TIMEOUT)
)
_blob_store_read_timeout = int(
    os.environ.get("DATATRACKER_BLOB_STORE_READ_TIMEOUT", BLOBSTORAGE_READ_TIMEOUT)
)
for storage_name in MORE_STORAGE_NAMES:
    STORAGES[storage_name] = {
        "BACKEND": "ietf.doc.storage_backends.CustomS3Storage",
        "OPTIONS": dict(
            endpoint_url=_blob_store_endpoint_url,
            access_key=_blob_store_access_key,
            secret_key=_blob_store_secret_key,
            security_token=None,
            client_config=botocore.config.Config(
                signature_version="s3v4",
                connect_timeout=_blob_store_connect_timeout,
                read_timeout=_blob_store_read_timeout,
                retries={"total_max_attempts": _blob_store_max_attempts},
            ),
            bucket_name=f"{_blob_store_bucket_prefix}{storage_name}".strip(),
            ietf_log_blob_timing=_blob_store_enable_profiling,
        ),
    }
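
A deployment can sanity-check this configuration before serving traffic by round-tripping a small object through each named storage. A sketch of such a smoke check (not part of this change):

    from django.core.files.base import ContentFile
    from django.core.files.storage import storages

    def blobstore_smoke_check():
        for name in MORE_STORAGE_NAMES:
            storage = storages[name]
            saved = storage.save("smoke-check", ContentFile(b"ok"))
            assert storage.exists(saved)
            storage.delete(saved)
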
@@ -6,6 +6,9 @@ beautifulsoup4>=4.11.1 # Only used in tests
bibtexparser>=1.2.0 # Only used in tests
bleach>=6
types-bleach>=6
boto3>=1.35,<1.36
boto3-stubs[s3]>=1.35,<1.36
botocore>=1.35,<1.36
celery>=5.2.6
coverage>=4.5.4,<5.0 # Coverage 5.x moves from a json database to SQLite. Moving to 5.x will require substantial rewrites in ietf.utils.test_runner and ietf.release.views
defusedxml>=0.7.1 # for TastyPie when using xml; not a declared dependency
@@ -21,6 +24,7 @@ django-markup>=1.5 # Limited use - need to reconcile against direct use of ma
django-oidc-provider==0.8.2 # 0.8.3 changes logout flow and claim return
django-referrer-policy>=1.0
django-simple-history>=3.0.0
django-storages>=1.14.4
django-stubs>=4.2.7,<5 # The django-stubs version used determines the mypy version indicated below
django-tastypie>=0.14.7,<0.15.0 # Version must be locked in sync with version of Django
django-vite>=2.0.2,<3
@@ -75,7 +79,7 @@ tblib>=1.7.0 # So that the django test runner provides tracebacks
tlds>=2022042700 # Used to teach bleach about which TLDs currently exist
tqdm>=4.64.0
Unidecode>=1.3.4
urllib3>=1.26,<2
weasyprint>=59
xml2rfc[pdf]>=3.23.0
xym>=0.6,<1.0