login page up

parent b5b28451b5
commit ec5842fd74
1
.env/lib/python3.10/site-packages/Flask_Session-0.4.0.dist-info/INSTALLER
Normal file
@@ -0,0 +1 @@
pip
31
.env/lib/python3.10/site-packages/Flask_Session-0.4.0.dist-info/LICENSE
Normal file
@@ -0,0 +1,31 @@
Copyright (c) 2014 by Shipeng Feng.

Some rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

* Redistributions of source code must retain the above copyright
  notice, this list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above
  copyright notice, this list of conditions and the following
  disclaimer in the documentation and/or other materials provided
  with the distribution.

* The names of the contributors may not be used to endorse or
  promote products derived from this software without specific
  prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
34
.env/lib/python3.10/site-packages/Flask_Session-0.4.0.dist-info/METADATA
Normal file
@@ -0,0 +1,34 @@
Metadata-Version: 2.1
Name: Flask-Session
Version: 0.4.0
Summary: Adds server-side session support to your Flask application
Home-page: https://github.com/fengsp/flask-session
Author: Shipeng Feng
Author-email: fsp261@gmail.com
License: BSD
Platform: any
Classifier: Environment :: Web Environment
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 2
Classifier: Topic :: Internet :: WWW/HTTP :: Dynamic Content
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Dist: Flask (>=0.8)
Requires-Dist: cachelib


Flask-Session
-------------

Flask-Session is an extension for Flask that adds support for
Server-side Session to your application.

Links
`````

* `development version
  <https://github.com/fengsp/flask-session/zipball/master#egg=Flask-dev>`_
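As an illustrative sketch (not part of the packaged files), wiring the extension into an app takes one config key and one constructor call. With ``SESSION_TYPE = "filesystem"`` the session data lives server-side through cachelib's ``FileSystemCache``, which is why ``cachelib`` appears in ``Requires-Dist`` above; the route and secret-key value here are assumptions for the example:

.. code:: python

    from flask import Flask, session
    from flask_session import Session

    app = Flask(__name__)
    # Store session data server-side (here: on disk) instead of in the
    # client cookie; only a session id is sent to the browser.
    app.config["SECRET_KEY"] = "change-me"      # illustrative value
    app.config["SESSION_TYPE"] = "filesystem"
    Session(app)

    @app.route("/login", methods=["POST"])
    def login():
        session["user"] = "alice"  # kept on the server by Flask-Session
        return "logged in"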
11
.env/lib/python3.10/site-packages/Flask_Session-0.4.0.dist-info/RECORD
Normal file
@@ -0,0 +1,11 @@
Flask_Session-0.4.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
Flask_Session-0.4.0.dist-info/LICENSE,sha256=S3lNnKCO6cV706SpiqaHVtNMshfWXZAIhYnZx-1W4q4,1455
Flask_Session-0.4.0.dist-info/METADATA,sha256=z5fKBiEzqMGBSuOVkPmc7Dkk-XbA7BJLmU6nDLrnw3Q,924
Flask_Session-0.4.0.dist-info/RECORD,,
Flask_Session-0.4.0.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
Flask_Session-0.4.0.dist-info/WHEEL,sha256=Z-nyYpwrcSqxfdux5Mbn_DQ525iP7J2DG3JgGvOYyTQ,110
Flask_Session-0.4.0.dist-info/top_level.txt,sha256=NLMy-fPmNVJe6dlgHK_74-fLp-pQl_X60Gi06-miwdk,14
flask_session/__init__.py,sha256=p_uu-alHjb7wP651oI63IrEOHJb3JtWEwTGz1QS3lVA,4223
flask_session/__pycache__/__init__.cpython-310.pyc,,
flask_session/__pycache__/sessions.cpython-310.pyc,,
flask_session/sessions.py,sha256=cNYNqDhLIb6CmqDzhwgJ_Y2fx02tDMsfkM7m1F6aeyk,22431
6
.env/lib/python3.10/site-packages/Flask_Session-0.4.0.dist-info/WHEEL
Normal file
@@ -0,0 +1,6 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.36.2)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any

1
.env/lib/python3.10/site-packages/Flask_Session-0.4.0.dist-info/top_level.txt
Normal file
@@ -0,0 +1 @@
flask_session
1
.env/lib/python3.10/site-packages/bcrypt-4.0.1.dist-info/INSTALLER
Normal file
@@ -0,0 +1 @@
pip
201
.env/lib/python3.10/site-packages/bcrypt-4.0.1.dist-info/LICENSE
Normal file
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
292
.env/lib/python3.10/site-packages/bcrypt-4.0.1.dist-info/METADATA
Normal file
@@ -0,0 +1,292 @@
Metadata-Version: 2.1
Name: bcrypt
Version: 4.0.1
Summary: Modern password hashing for your software and your servers
Home-page: https://github.com/pyca/bcrypt/
Author: The Python Cryptographic Authority developers
Author-email: cryptography-dev@python.org
License: Apache License, Version 2.0
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Requires-Python: >=3.6
Description-Content-Type: text/x-rst
License-File: LICENSE
Provides-Extra: tests
Requires-Dist: pytest (!=3.3.0,>=3.2.1) ; extra == 'tests'
Provides-Extra: typecheck
Requires-Dist: mypy ; extra == 'typecheck'

bcrypt
======

.. image:: https://img.shields.io/pypi/v/bcrypt.svg
    :target: https://pypi.org/project/bcrypt/
    :alt: Latest Version

.. image:: https://github.com/pyca/bcrypt/workflows/CI/badge.svg?branch=main
    :target: https://github.com/pyca/bcrypt/actions?query=workflow%3ACI+branch%3Amain

Acceptable password hashing for your software and your servers (but you should
really use argon2id or scrypt)

Installation
============

To install bcrypt, simply:

.. code:: bash

    $ pip install bcrypt

Note that bcrypt should build very easily on Linux provided you have a C
compiler and a Rust compiler (the minimum supported Rust version is 1.56.0).

For Debian and Ubuntu, the following command will ensure that the required dependencies are installed:

.. code:: bash

    $ sudo apt-get install build-essential cargo

For Fedora and RHEL-derivatives, the following command will ensure that the required dependencies are installed:

.. code:: bash

    $ sudo yum install gcc cargo

For Alpine, the following command will ensure that the required dependencies are installed:

.. code:: bash

    $ apk add --update musl-dev gcc cargo

Alternatives
============

While bcrypt remains an acceptable choice for password storage, depending on your specific use case you may also want to consider using scrypt (either via `standard library`_ or `cryptography`_) or argon2id via `argon2_cffi`_.

Changelog
=========

4.0.1
-----

* We now build PyPy ``manylinux`` wheels.
* Fixed a bug where passing an invalid ``salt`` to ``checkpw`` could result in
  a ``pyo3_runtime.PanicException``. It now correctly raises a ``ValueError``.

4.0.0
-----

* ``bcrypt`` is now implemented in Rust. Users building from source will need
  to have a Rust compiler available. Nothing will change for users downloading
  wheels.
* We no longer ship ``manylinux2010`` wheels. Users should upgrade to the latest
  ``pip`` to ensure this doesn’t cause issues downloading wheels on their
  platform. We now ship ``manylinux_2_28`` wheels for users on new enough platforms.
* ``NUL`` bytes are now allowed in inputs.

3.2.2
-----

* Fixed packaging of ``py.typed`` files in wheels so that ``mypy`` works.

3.2.1
-----

* Added support for compilation on z/OS
* The next release of ``bcrypt`` will be 4.0 and it will require Rust at
  compile time, for users building from source. There will be no additional
  requirement for users who are installing from wheels. Users on most
  platforms will be able to obtain a wheel by making sure they have an up to
  date ``pip``. The minimum supported Rust version will be 1.56.0.
* This will be the final release for which we ship ``manylinux2010`` wheels.
  Going forward the minimum supported manylinux ABI for our wheels will be
  ``manylinux2014``. The vast majority of users will continue to receive
  ``manylinux`` wheels provided they have an up to date ``pip``.

3.2.0
-----

* Added typehints for library functions.
* Dropped support for Python versions less than 3.6 (2.7, 3.4, 3.5).
* Shipped ``abi3`` Windows wheels (requires pip >= 20).

3.1.7
-----

* Set a ``setuptools`` lower bound for PEP517 wheel building.
* We no longer distribute 32-bit ``manylinux1`` wheels. Continuing to produce
  them was a maintenance burden.

3.1.6
-----

* Added support for compilation on Haiku.

3.1.5
-----

* Added support for compilation on AIX.
* Dropped Python 2.6 and 3.3 support.
* Switched to using ``abi3`` wheels for Python 3. If you are not getting a
  wheel on a compatible platform please upgrade your ``pip`` version.

3.1.4
-----

* Fixed compilation with mingw and on illumos.

3.1.3
-----

* Fixed a compilation issue on Solaris.
* Added a warning when using too few rounds with ``kdf``.

3.1.2
-----

* Fixed a compile issue affecting big endian platforms.
* Fixed invalid escape sequence warnings on Python 3.6.
* Fixed building in non-UTF8 environments on Python 2.

3.1.1
-----

* Resolved a ``UserWarning`` when used with ``cffi`` 1.8.3.

3.1.0
-----

* Added support for ``checkpw``, a convenience method for verifying a password.
* Ensure that you get a ``$2y$`` hash when you input a ``$2y$`` salt.
* Fixed a regression where ``$2a`` hashes were vulnerable to a wraparound bug.
* Fixed compilation under Alpine Linux.

3.0.0
-----

* Switched the C backend to code obtained from the OpenBSD project rather than
  openwall.
* Added support for ``bcrypt_pbkdf`` via the ``kdf`` function.

2.0.0
-----

* Added support for an adjustable prefix when calling ``gensalt``.
* Switched to CFFI 1.0+

Usage
-----

Password Hashing
~~~~~~~~~~~~~~~~

Hashing and then later checking that a password matches the previous hashed
password is very simple:

.. code:: pycon

    >>> import bcrypt
    >>> password = b"super secret password"
    >>> # Hash a password for the first time, with a randomly-generated salt
    >>> hashed = bcrypt.hashpw(password, bcrypt.gensalt())
    >>> # Check that an unhashed password matches one that has previously been
    >>> # hashed
    >>> if bcrypt.checkpw(password, hashed):
    ...     print("It Matches!")
    ... else:
    ...     print("It Does not Match :(")

KDF
~~~

As of 3.0.0 ``bcrypt`` now offers a ``kdf`` function which does ``bcrypt_pbkdf``.
This KDF is used in OpenSSH's newer encrypted private key format.

.. code:: pycon

    >>> import bcrypt
    >>> key = bcrypt.kdf(
    ...     password=b'password',
    ...     salt=b'salt',
    ...     desired_key_bytes=32,
    ...     rounds=100)

Adjustable Work Factor
~~~~~~~~~~~~~~~~~~~~~~

One of bcrypt's features is an adjustable logarithmic work factor. To adjust
the work factor, pass the desired number of rounds to ``bcrypt.gensalt``
(its ``rounds`` parameter defaults to 12):

.. code:: pycon

    >>> import bcrypt
    >>> password = b"super secret password"
    >>> # Hash a password for the first time, with a certain number of rounds
    >>> hashed = bcrypt.hashpw(password, bcrypt.gensalt(14))
    >>> # Check that an unhashed password matches one that has previously been
    >>> # hashed
    >>> if bcrypt.checkpw(password, hashed):
    ...     print("It Matches!")
    ... else:
    ...     print("It Does not Match :(")

Adjustable Prefix
~~~~~~~~~~~~~~~~~

Another one of bcrypt's features is an adjustable prefix to let you define what
libraries you'll remain compatible with. To adjust this, pass either ``2a`` or
``2b`` (the default) to ``bcrypt.gensalt(prefix=b"2b")`` as a bytes object.

As of 3.0.0 the ``$2y$`` prefix is still supported in ``hashpw`` but deprecated.
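A short sketch of the prefix option in use (an illustrative addition building on ``gensalt`` as documented above; the requested prefix appears on both the salt and the resulting hash):

.. code:: pycon

    >>> import bcrypt
    >>> # Ask for the legacy $2a$ prefix instead of the default $2b$
    >>> salt = bcrypt.gensalt(rounds=12, prefix=b"2a")
    >>> salt[:4]
    b'$2a$'
    >>> bcrypt.hashpw(b"super secret password", salt)[:4]
    b'$2a$'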
Maximum Password Length
~~~~~~~~~~~~~~~~~~~~~~~

The bcrypt algorithm only handles passwords up to 72 characters; any characters
beyond that are ignored. To work around this, a common approach is to hash a
password with a cryptographic hash (such as ``sha256``) and then base64
encode it to prevent NULL byte problems before hashing the result with
``bcrypt``:

.. code:: pycon

    >>> import base64
    >>> import hashlib
    >>> import bcrypt
    >>> password = b"an incredibly long password" * 10
    >>> hashed = bcrypt.hashpw(
    ...     base64.b64encode(hashlib.sha256(password).digest()),
    ...     bcrypt.gensalt()
    ... )

Compatibility
-------------

This library should be compatible with py-bcrypt and it will run on Python
3.6+, and PyPy 3.

C Code
------

This library uses code from OpenBSD.

Security
--------

``bcrypt`` follows the `same security policy as cryptography`_; if you
identify a vulnerability, we ask you to contact us privately.

.. _`same security policy as cryptography`: https://cryptography.io/en/latest/security.html
.. _`standard library`: https://docs.python.org/3/library/hashlib.html#hashlib.scrypt
.. _`argon2_cffi`: https://argon2-cffi.readthedocs.io
.. _`cryptography`: https://cryptography.io/en/latest/hazmat/primitives/key-derivation-functions/#cryptography.hazmat.primitives.kdf.scrypt.Scrypt
14
.env/lib/python3.10/site-packages/bcrypt-4.0.1.dist-info/RECORD
Normal file
@@ -0,0 +1,14 @@
bcrypt-4.0.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
bcrypt-4.0.1.dist-info/LICENSE,sha256=gXPVwptPlW1TJ4HSuG5OMPg-a3h43OGMkZRR1rpwfJA,10850
bcrypt-4.0.1.dist-info/METADATA,sha256=peZwWFa95xnpp4NiIE7gJkV01CTkbVXIzoEN66SXd3c,8972
bcrypt-4.0.1.dist-info/RECORD,,
bcrypt-4.0.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
bcrypt-4.0.1.dist-info/WHEEL,sha256=ZXaM-AC_dnzk1sUAdQV_bMrIMG6zI-GthFaEkNkWsgU,112
bcrypt-4.0.1.dist-info/top_level.txt,sha256=BkR_qBzDbSuycMzHWE1vzXrfYecAzUVmQs6G2CukqNI,7
bcrypt/__about__.py,sha256=F7i0CQOa8G3Yjw1T71jQv8yi__Z_4TzLyZJv1GFqVx0,1320
bcrypt/__init__.py,sha256=EpUdbfHaiHlSoaM-SSUB6MOgNpWOIkS0ZrjxogPIRLM,3781
bcrypt/__pycache__/__about__.cpython-310.pyc,,
bcrypt/__pycache__/__init__.cpython-310.pyc,,
bcrypt/_bcrypt.abi3.so,sha256=_T-y5IrekziUzkYio4hWH7Xzw92XBKewSLd8kmERhGU,1959696
bcrypt/_bcrypt.pyi,sha256=O-vvHdooGyAxIkdKemVqOzBF5aMhh0evPSaDMgETgEk,214
bcrypt/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
5
.env/lib/python3.10/site-packages/bcrypt-4.0.1.dist-info/WHEEL
Normal file
@@ -0,0 +1,5 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.37.1)
Root-Is-Purelib: false
Tag: cp36-abi3-manylinux_2_28_x86_64

1
.env/lib/python3.10/site-packages/bcrypt-4.0.1.dist-info/top_level.txt
Normal file
@@ -0,0 +1 @@
bcrypt
41
.env/lib/python3.10/site-packages/bcrypt/__about__.py
Normal file
@@ -0,0 +1,41 @@
# Author:: Donald Stufft (<donald@stufft.io>)
# Copyright:: Copyright (c) 2013 Donald Stufft
# License:: Apache License, Version 2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals

__all__ = [
    "__title__",
    "__summary__",
    "__uri__",
    "__version__",
    "__author__",
    "__email__",
    "__license__",
    "__copyright__",
]

__title__ = "bcrypt"
__summary__ = "Modern password hashing for your software and your servers"
__uri__ = "https://github.com/pyca/bcrypt/"

__version__ = "4.0.1"

__author__ = "The Python Cryptographic Authority developers"
__email__ = "cryptography-dev@python.org"

__license__ = "Apache License, Version 2.0"
__copyright__ = "Copyright 2013-2022 {0}".format(__author__)
127
.env/lib/python3.10/site-packages/bcrypt/__init__.py
Normal file
@@ -0,0 +1,127 @@
# Author:: Donald Stufft (<donald@stufft.io>)
# Copyright:: Copyright (c) 2013 Donald Stufft
# License:: Apache License, Version 2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division

import hmac
import os
import warnings

from .__about__ import (
    __author__,
    __copyright__,
    __email__,
    __license__,
    __summary__,
    __title__,
    __uri__,
    __version__,
)
from . import _bcrypt  # noqa: I100


__all__ = [
    "__title__",
    "__summary__",
    "__uri__",
    "__version__",
    "__author__",
    "__email__",
    "__license__",
    "__copyright__",
    "gensalt",
    "hashpw",
    "kdf",
    "checkpw",
]


def gensalt(rounds: int = 12, prefix: bytes = b"2b") -> bytes:
    if prefix not in (b"2a", b"2b"):
        raise ValueError("Supported prefixes are b'2a' or b'2b'")

    if rounds < 4 or rounds > 31:
        raise ValueError("Invalid rounds")

    salt = os.urandom(16)
    output = _bcrypt.encode_base64(salt)

    return (
        b"$"
        + prefix
        + b"$"
        + ("%2.2u" % rounds).encode("ascii")
        + b"$"
        + output
    )


def hashpw(password: bytes, salt: bytes) -> bytes:
    if isinstance(password, str) or isinstance(salt, str):
        raise TypeError("Strings must be encoded before hashing")

    # bcrypt originally suffered from a wraparound bug:
    # http://www.openwall.com/lists/oss-security/2012/01/02/4
    # This bug was corrected in the OpenBSD source by truncating inputs to 72
    # bytes on the updated prefix $2b$, but leaving $2a$ unchanged for
    # compatibility. However, pyca/bcrypt 2.0.0 *did* correctly truncate inputs
    # on $2a$, so we do it here to preserve compatibility with 2.0.0
    password = password[:72]

    return _bcrypt.hashpass(password, salt)


def checkpw(password: bytes, hashed_password: bytes) -> bool:
    if isinstance(password, str) or isinstance(hashed_password, str):
        raise TypeError("Strings must be encoded before checking")

    ret = hashpw(password, hashed_password)
    return hmac.compare_digest(ret, hashed_password)


def kdf(
    password: bytes,
    salt: bytes,
    desired_key_bytes: int,
    rounds: int,
    ignore_few_rounds: bool = False,
) -> bytes:
    if isinstance(password, str) or isinstance(salt, str):
        raise TypeError("Strings must be encoded before hashing")

    if len(password) == 0 or len(salt) == 0:
        raise ValueError("password and salt must not be empty")

    if desired_key_bytes <= 0 or desired_key_bytes > 512:
        raise ValueError("desired_key_bytes must be 1-512")

    if rounds < 1:
        raise ValueError("rounds must be 1 or more")

    if rounds < 50 and not ignore_few_rounds:
        # They probably think bcrypt.kdf()'s rounds parameter is logarithmic,
        # expecting this value to be slow enough (it probably would be if this
        # were bcrypt). Emit a warning.
        warnings.warn(
            (
                "Warning: bcrypt.kdf() called with only {0} round(s). "
                "This few is not secure: the parameter is linear, like PBKDF2."
            ).format(rounds),
            UserWarning,
            stacklevel=2,
        )

    return _bcrypt.pbkdf(password, salt, rounds, desired_key_bytes)
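The ``password = password[:72]`` truncation in ``hashpw`` above is observable from the public API; a small illustrative sketch (not part of the vendored file):

.. code:: pycon

    >>> import bcrypt
    >>> hashed = bcrypt.hashpw(b"x" * 80, bcrypt.gensalt())
    >>> bcrypt.checkpw(b"x" * 72, hashed)  # bytes beyond 72 are ignored
    True
    >>> bcrypt.checkpw(b"x" * 71, hashed)  # a genuinely shorter input differs
    False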
Binary files not shown: bcrypt/__pycache__/__about__.cpython-310.pyc and bcrypt/__pycache__/__init__.cpython-310.pyc (listed in RECORD above).

BIN
.env/lib/python3.10/site-packages/bcrypt/_bcrypt.abi3.so
Executable file
Binary file not shown.
7
.env/lib/python3.10/site-packages/bcrypt/_bcrypt.pyi
Normal file
@@ -0,0 +1,7 @@
import typing

def encode_base64(data: bytes) -> bytes: ...
def hashpass(password: bytes, salt: bytes) -> bytes: ...
def pbkdf(
    password: bytes, salt: bytes, rounds: int, desired_key_bytes: int
) -> bytes: ...
0
.env/lib/python3.10/site-packages/bcrypt/py.typed
Normal file
1
.env/lib/python3.10/site-packages/cachelib-0.10.2.dist-info/INSTALLER
Normal file
@@ -0,0 +1 @@
pip
28
.env/lib/python3.10/site-packages/cachelib-0.10.2.dist-info/LICENSE.rst
Normal file
@@ -0,0 +1,28 @@
Copyright 2018 Pallets

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

1.  Redistributions of source code must retain the above copyright
    notice, this list of conditions and the following disclaimer.

2.  Redistributions in binary form must reproduce the above copyright
    notice, this list of conditions and the following disclaimer in the
    documentation and/or other materials provided with the distribution.

3.  Neither the name of the copyright holder nor the names of its
    contributors may be used to endorse or promote products derived from
    this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
67
.env/lib/python3.10/site-packages/cachelib-0.10.2.dist-info/METADATA
Normal file
@@ -0,0 +1,67 @@
Metadata-Version: 2.1
Name: cachelib
Version: 0.10.2
Summary: A collection of cache libraries in the same API interface.
Home-page: https://github.com/pallets-eco/cachelib/
Maintainer: Pallets
Maintainer-email: contact@palletsprojects.com
License: BSD-3-Clause
Project-URL: Donate, https://palletsprojects.com/donate
Project-URL: Documentation, https://cachelib.readthedocs.io/
Project-URL: Changes, https://cachelib.readthedocs.io/changes/
Project-URL: Source Code, https://github.com/pallets-eco/cachelib/
Project-URL: Issue Tracker, https://github.com/pallets-eco/cachelib/issues/
Project-URL: Twitter, https://twitter.com/PalletsTeam
Project-URL: Chat, https://discord.gg/pallets
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Requires-Python: >=3.7
Description-Content-Type: text/x-rst
License-File: LICENSE.rst

CacheLib
========

A collection of cache libraries in the same API interface. Extracted
from Werkzeug.


Installing
----------

Install and update using `pip`_:

.. code-block:: text

    $ pip install -U cachelib

.. _pip: https://pip.pypa.io/en/stable/getting-started/


Donate
------

The Pallets organization develops and supports Flask and the libraries
it uses. In order to grow the community of contributors and users, and
allow the maintainers to devote more time to the projects, `please
donate today`_.

.. _please donate today: https://palletsprojects.com/donate


Links
-----

- Documentation: https://cachelib.readthedocs.io/
- Changes: https://cachelib.readthedocs.io/changes/
- PyPI Releases: https://pypi.org/project/cachelib/
- Source Code: https://github.com/pallets/cachelib/
- Issue Tracker: https://github.com/pallets/cachelib/issues/
- Twitter: https://twitter.com/PalletsTeam
- Chat: https://discord.gg/pallets
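All cachelib backends share the ``BaseCache`` API defined in ``cachelib/base.py`` further down; a minimal sketch with the in-memory ``SimpleCache`` (key and value are illustrative):

.. code:: python

    from cachelib import SimpleCache

    cache = SimpleCache()
    cache.set("greeting", "hello", timeout=60)  # expires after 60 seconds
    assert cache.get("greeting") == "hello"
    cache.delete("greeting")
    assert cache.get("greeting") is None        # misses return None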
25
.env/lib/python3.10/site-packages/cachelib-0.10.2.dist-info/RECORD
Normal file
@@ -0,0 +1,25 @@
cachelib-0.10.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
cachelib-0.10.2.dist-info/LICENSE.rst,sha256=zUGBIIEtwmJiga4CfoG2SCKdFmtaynRyzs1RADjTbn0,1475
cachelib-0.10.2.dist-info/METADATA,sha256=Qggowi1hXrTDQ5jds9Ebxrk0l1CZIsLc_kBZfuLy0dw,1980
cachelib-0.10.2.dist-info/RECORD,,
cachelib-0.10.2.dist-info/WHEEL,sha256=00yskusixUoUt5ob_CiUp6LsnN5lqzTJpoqOFg_FVIc,92
cachelib-0.10.2.dist-info/top_level.txt,sha256=AYC4q8wgGd_hR_F2YcDkmtQm41gv9-5AThKuQtNPEXk,9
cachelib/__init__.py,sha256=nmWMemwO6P1zf9MbyI-YEAWupb0hawB2g0vkUGlVza0,513
cachelib/__pycache__/__init__.cpython-310.pyc,,
cachelib/__pycache__/base.cpython-310.pyc,,
cachelib/__pycache__/dynamodb.cpython-310.pyc,,
cachelib/__pycache__/file.cpython-310.pyc,,
cachelib/__pycache__/memcached.cpython-310.pyc,,
cachelib/__pycache__/redis.cpython-310.pyc,,
cachelib/__pycache__/serializers.cpython-310.pyc,,
cachelib/__pycache__/simple.cpython-310.pyc,,
cachelib/__pycache__/uwsgi.cpython-310.pyc,,
cachelib/base.py,sha256=HF06krAmni7ZIjM5oztpBzSULSbl-E5hDdzf511fhOQ,6727
cachelib/dynamodb.py,sha256=w5kyLaC0UZ39H1DqVo-kK8o64JyKkrjclXyuny3OGxA,8513
cachelib/file.py,sha256=U43TZ5M8D1VqRjBiEjOTOgmeqsSWp3Jj5Swt6UPGHvE,11736
cachelib/memcached.py,sha256=eOv5vkA3HFHYTgInTwCkV0FQLbpx1lPWWmutPGjz6gk,7161
cachelib/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
cachelib/redis.py,sha256=FJbBBOcm5nnAAxogOsmZuMaWksXG0LbMmgvAHSkoy3Q,5977
cachelib/serializers.py,sha256=lbBcZICJE-jAnzM7XT3ZMAmFTwsh9JUzwsl78W5sGSM,3421
cachelib/simple.py,sha256=q5j5WDwOFPdAgJI6wcj4LnFCaix3jUB0sDVuLO-wsWY,3481
cachelib/uwsgi.py,sha256=4DX3C9QGvB6mVcg1d7qpLIEkI6bccuq-8M6I_YbPicY,2563
5
.env/lib/python3.10/site-packages/cachelib-0.10.2.dist-info/WHEEL
Normal file
@@ -0,0 +1,5 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.38.1)
Root-Is-Purelib: true
Tag: py3-none-any

1
.env/lib/python3.10/site-packages/cachelib-0.10.2.dist-info/top_level.txt
Normal file
@@ -0,0 +1 @@
cachelib
20
.env/lib/python3.10/site-packages/cachelib/__init__.py
Normal file
@@ -0,0 +1,20 @@
from cachelib.base import BaseCache
from cachelib.base import NullCache
from cachelib.dynamodb import DynamoDbCache
from cachelib.file import FileSystemCache
from cachelib.memcached import MemcachedCache
from cachelib.redis import RedisCache
from cachelib.simple import SimpleCache
from cachelib.uwsgi import UWSGICache

__all__ = [
    "BaseCache",
    "NullCache",
    "SimpleCache",
    "FileSystemCache",
    "MemcachedCache",
    "RedisCache",
    "UWSGICache",
    "DynamoDbCache",
]
__version__ = "0.10.2"
Binary files not shown: the nine cachelib/__pycache__/*.cpython-310.pyc files listed in RECORD above.
185
.env/lib/python3.10/site-packages/cachelib/base.py
Normal file
@@ -0,0 +1,185 @@
import typing as _t


class BaseCache:
    """Baseclass for the cache systems. All the cache systems implement this
    API or a superset of it.

    :param default_timeout: the default timeout (in seconds) that is used if
                            no timeout is specified on :meth:`set`. A timeout
                            of 0 indicates that the cache never expires.
    """

    def __init__(self, default_timeout: int = 300):
        self.default_timeout = default_timeout

    def _normalize_timeout(self, timeout: _t.Optional[int]) -> int:
        if timeout is None:
            timeout = self.default_timeout
        return timeout

    def get(self, key: str) -> _t.Any:
        """Look up key in the cache and return the value for it.

        :param key: the key to be looked up.
        :returns: The value if it exists and is readable, else ``None``.
        """
        return None

    def delete(self, key: str) -> bool:
        """Delete `key` from the cache.

        :param key: the key to delete.
        :returns: Whether the key existed and has been deleted.
        :rtype: boolean
        """
        return True

    def get_many(self, *keys: str) -> _t.List[_t.Any]:
        """Returns a list of values for the given keys.
        For each key an item in the list is created::

            foo, bar = cache.get_many("foo", "bar")

        Has the same error handling as :meth:`get`.

        :param keys: The function accepts multiple keys as positional
                     arguments.
        """
        return [self.get(k) for k in keys]

    def get_dict(self, *keys: str) -> _t.Dict[str, _t.Any]:
        """Like :meth:`get_many` but return a dict::

            d = cache.get_dict("foo", "bar")
            foo = d["foo"]
            bar = d["bar"]

        :param keys: The function accepts multiple keys as positional
                     arguments.
        """
        return dict(zip(keys, self.get_many(*keys)))  # noqa: B905

    def set(
        self, key: str, value: _t.Any, timeout: _t.Optional[int] = None
    ) -> _t.Optional[bool]:
        """Add a new key/value to the cache (overwrites value, if key already
        exists in the cache).

        :param key: the key to set
        :param value: the value for the key
        :param timeout: the cache timeout for the key in seconds (if not
                        specified, it uses the default timeout). A timeout of
                        0 indicates that the cache never expires.
        :returns: ``True`` if key has been updated, ``False`` for backend
                  errors. Pickling errors, however, will raise a subclass of
                  ``pickle.PickleError``.
        :rtype: boolean
        """
        return True

    def add(self, key: str, value: _t.Any, timeout: _t.Optional[int] = None) -> bool:
        """Works like :meth:`set` but does not overwrite the values of already
        existing keys.

        :param key: the key to set
        :param value: the value for the key
        :param timeout: the cache timeout for the key in seconds (if not
                        specified, it uses the default timeout). A timeout of
                        0 indicates that the cache never expires.
        :returns: Same as :meth:`set`, but also ``False`` for already
                  existing keys.
        :rtype: boolean
        """
        return True

    def set_many(
        self, mapping: _t.Dict[str, _t.Any], timeout: _t.Optional[int] = None
    ) -> _t.List[_t.Any]:
        """Sets multiple keys and values from a mapping.

        :param mapping: a mapping with the keys/values to set.
        :param timeout: the cache timeout for the key in seconds (if not
                        specified, it uses the default timeout). A timeout of
                        0 indicates that the cache never expires.
        :returns: A list containing all keys successfully set
        :rtype: list
        """
        set_keys = []
        for key, value in mapping.items():
            if self.set(key, value, timeout):
                set_keys.append(key)
        return set_keys

    def delete_many(self, *keys: str) -> _t.List[_t.Any]:
        """Deletes multiple keys at once.

        :param keys: The function accepts multiple keys as positional
                     arguments.
        :returns: A list containing all successfully deleted keys
        :rtype: list
        """
        deleted_keys = []
        for key in keys:
            if self.delete(key):
                deleted_keys.append(key)
        return deleted_keys

    def has(self, key: str) -> bool:
        """Checks if a key exists in the cache without returning it. This is a
        cheap operation that bypasses loading the actual data on the backend.

        :param key: the key to check
        """
        raise NotImplementedError(
            "%s doesn't have an efficient implementation of `has`. That "
            "means it is impossible to check whether a key exists without "
            "fully loading the key's data. Consider using `self.get` "
            "explicitly if you don't care about performance."
        )

    def clear(self) -> bool:
        """Clears the cache. Keep in mind that not all caches support
        completely clearing the cache.

        :returns: Whether the cache has been cleared.
        :rtype: boolean
        """
        return True

    def inc(self, key: str, delta: int = 1) -> _t.Optional[int]:
        """Increments the value of a key by `delta`. If the key does
        not yet exist it is initialized with `delta`.

        For supporting caches this is an atomic operation.

        :param key: the key to increment.
        :param delta: the delta to add.
        :returns: The new value or ``None`` for backend errors.
        """
        value = (self.get(key) or 0) + delta
        return value if self.set(key, value) else None

    def dec(self, key: str, delta: int = 1) -> _t.Optional[int]:
        """Decrements the value of a key by `delta`. If the key does
        not yet exist it is initialized with `-delta`.

        For supporting caches this is an atomic operation.

        :param key: the key to increment.
        :param delta: the delta to subtract.
        :returns: The new value or `None` for backend errors.
        """
        value = (self.get(key) or 0) - delta
        return value if self.set(key, value) else None


class NullCache(BaseCache):
    """A cache that doesn't cache. This can be useful for unit testing.

    :param default_timeout: a dummy parameter that is ignored but exists
                            for API compatibility with other caches.
    """

    def has(self, key: str) -> bool:
        return False
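To make the ``BaseCache`` contract concrete, here is a minimal dict-backed subclass (an illustrative sketch, not part of the package: only ``get``, ``set``, ``delete``, and ``has`` are overridden, timeouts are ignored for brevity, and ``get_dict``, ``set_many``, and ``inc`` all come from the base class above):

.. code:: python

    import typing as _t

    from cachelib.base import BaseCache


    class DictCache(BaseCache):
        """Toy backend over a plain dict, with no expiry."""

        def __init__(self, default_timeout: int = 300):
            super().__init__(default_timeout)
            self._store: _t.Dict[str, _t.Any] = {}

        def get(self, key: str) -> _t.Any:
            return self._store.get(key)

        def set(
            self, key: str, value: _t.Any, timeout: _t.Optional[int] = None
        ) -> bool:
            self._store[key] = value
            return True

        def delete(self, key: str) -> bool:
            return self._store.pop(key, None) is not None

        def has(self, key: str) -> bool:
            return key in self._store


    cache = DictCache()
    cache.set_many({"a": 1, "b": 2})        # inherited: loops over set()
    assert cache.get_dict("a", "b") == {"a": 1, "b": 2}
    assert cache.inc("a") == 2              # inherited: get() + set()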
227
.env/lib/python3.10/site-packages/cachelib/dynamodb.py
Normal file
@@ -0,0 +1,227 @@
|
||||
import datetime
|
||||
import typing as _t
|
||||
|
||||
from cachelib.base import BaseCache
|
||||
from cachelib.serializers import DynamoDbSerializer
|
||||
|
||||
CREATED_AT_FIELD = "created_at"
|
||||
RESPONSE_FIELD = "response"
|
||||
|
||||
|
||||
class DynamoDbCache(BaseCache):
|
||||
"""
|
||||
Implementation of cachelib.BaseCache that uses an AWS DynamoDb table
|
||||
as the backend.
|
||||
|
||||
Your server process will require dynamodb:GetItem and dynamodb:PutItem
|
||||
IAM permissions on the cache table.
|
||||
|
||||
Limitations: DynamoDB table items are limited to 400 KB in size. Since
|
||||
this class stores cached items in a table, the max size of a cache entry
|
||||
will be slightly less than 400 KB, since the cache key and expiration
|
||||
time fields are also part of the item.
|
||||
|
||||
:param table_name: The name of the DynamoDB table to use
|
||||
:param default_timeout: Set the timeout in seconds after which cache entries
|
||||
expire
|
||||
:param key_field: The name of the hash_key attribute in the DynamoDb
|
||||
table. This must be a string attribute.
|
||||
:param expiration_time_field: The name of the table attribute to store the
|
||||
expiration time in. This will be an int
|
||||
attribute. The timestamp will be stored as
|
||||
seconds past the epoch. If you configure
|
||||
this as the TTL field, then DynamoDB will
|
||||
automatically delete expired entries.
|
||||
:param key_prefix: A prefix that should be added to all keys.
|
||||
|
||||
"""
|
||||
|
||||
serializer = DynamoDbSerializer()
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
table_name: _t.Optional[str] = "python-cache",
|
||||
default_timeout: int = 300,
|
||||
key_field: _t.Optional[str] = "cache_key",
|
||||
expiration_time_field: _t.Optional[str] = "expiration_time",
|
||||
key_prefix: _t.Optional[str] = None,
|
||||
**kwargs: _t.Any
|
||||
):
|
||||
super().__init__(default_timeout)
|
||||
|
||||
try:
|
||||
import boto3 # type: ignore
|
||||
except ImportError as err:
|
||||
raise RuntimeError("no boto3 module found") from err
|
||||
|
||||
self._table_name = table_name
|
||||
self._key_field = key_field
|
||||
self._expiration_time_field = expiration_time_field
|
||||
self.key_prefix = key_prefix or ""
|
||||
self._dynamo = boto3.resource("dynamodb", **kwargs)
|
||||
self._attr = boto3.dynamodb.conditions.Attr
|
||||
|
||||
try:
|
||||
self._table = self._dynamo.Table(table_name)
|
||||
self._table.load()
|
||||
# catch this exception (triggered if the table doesn't exist)
|
||||
except Exception:
|
||||
table = self._dynamo.create_table(
|
||||
AttributeDefinitions=[
|
||||
{"AttributeName": key_field, "AttributeType": "S"}
|
||||
],
|
||||
TableName=table_name,
|
||||
KeySchema=[
|
||||
{"AttributeName": key_field, "KeyType": "HASH"},
|
||||
],
|
||||
BillingMode="PAY_PER_REQUEST",
|
||||
)
|
||||
table.wait_until_exists()
|
||||
dynamo = boto3.client("dynamodb", **kwargs)
|
||||
dynamo.update_time_to_live(
|
||||
TableName=table_name,
|
||||
TimeToLiveSpecification={
|
||||
"Enabled": True,
|
||||
"AttributeName": expiration_time_field,
|
||||
},
|
||||
)
|
||||
self._table = self._dynamo.Table(table_name)
|
||||
self._table.load()
|
||||
|
||||
def _utcnow(self) -> _t.Any:
|
||||
"""Return a tz-aware UTC datetime representing the current time"""
|
||||
return datetime.datetime.utcnow().replace(tzinfo=datetime.timezone.utc)
|
||||
|
||||
def _get_item(self, key: str, attributes: _t.Optional[list] = None) -> _t.Any:
|
||||
"""
|
||||
Get an item from the cache table, optionally limiting the returned
|
||||
attributes.
|
||||
|
||||
:param key: The cache key of the item to fetch
|
||||
|
||||
:param attributes: An optional list of attributes to fetch. If not
|
||||
given, all attributes are fetched. The
|
||||
expiration_time field will always be added to the
|
||||
list of fetched attributes.
|
||||
:return: The table item for key if it exists and is not expired, else
|
||||
None
|
||||
"""
|
||||
kwargs = {}
|
||||
if attributes:
|
||||
if self._expiration_time_field not in attributes:
|
||||
attributes = list(attributes) + [self._expiration_time_field]
|
||||
kwargs = dict(ProjectionExpression=",".join(attributes))
|
||||
|
||||
response = self._table.get_item(Key={self._key_field: key}, **kwargs)
|
||||
cache_item = response.get("Item")
|
||||
|
||||
if cache_item:
|
||||
        now = int(self._utcnow().timestamp())
        if cache_item.get(self._expiration_time_field, now + 100) > now:
            return cache_item

        return None

    def get(self, key: str) -> _t.Any:
        """
        Get a cache item

        :param key: The cache key of the item to fetch
        :return: cache value if not expired, else None
        """
        cache_item = self._get_item(self.key_prefix + key)
        if cache_item:
            response = cache_item[RESPONSE_FIELD]
            value = self.serializer.loads(response)
            return value
        return None

    def delete(self, key: str) -> bool:
        """
        Deletes an item from the cache. This is a no-op if the item doesn't
        exist

        :param key: Key of the item to delete.
        :return: True if the key existed and was deleted
        """
        try:
            self._table.delete_item(
                Key={self._key_field: self.key_prefix + key},
                ConditionExpression=self._attr(self._key_field).exists(),
            )
            return True
        except self._dynamo.meta.client.exceptions.ConditionalCheckFailedException:
            return False

    def _set(
        self,
        key: str,
        value: _t.Any,
        timeout: _t.Optional[int] = None,
        overwrite: _t.Optional[bool] = True,
    ) -> _t.Any:
        """
        Store a cache item, with the option to not overwrite existing items

        :param key: Cache key to use
        :param value: a serializable object
        :param timeout: The timeout in seconds for the cached item, to override
                        the default
        :param overwrite: If true, overwrite any existing cache item with key.
                          If false, the new value will only be stored if no
                          non-expired cache item exists with key.
        :return: True if the new item was stored.
        """
        timeout = self._normalize_timeout(timeout)
        now = self._utcnow()

        kwargs = {}
        if not overwrite:
            # Cause the put to fail if a non-expired item with this key
            # already exists
            cond = self._attr(self._key_field).not_exists() | self._attr(
                self._expiration_time_field
            ).lte(int(now.timestamp()))
            kwargs = dict(ConditionExpression=cond)

        try:
            dump = self.serializer.dumps(value)
            item = {
                self._key_field: key,
                CREATED_AT_FIELD: now.isoformat(),
                RESPONSE_FIELD: dump,
            }
            if timeout > 0:
                expiration_time = now + datetime.timedelta(seconds=timeout)
                item[self._expiration_time_field] = int(expiration_time.timestamp())
            self._table.put_item(Item=item, **kwargs)
            return True
        except Exception:
            return False

    def set(self, key: str, value: _t.Any, timeout: _t.Optional[int] = None) -> _t.Any:
        return self._set(self.key_prefix + key, value, timeout=timeout, overwrite=True)

    def add(self, key: str, value: _t.Any, timeout: _t.Optional[int] = None) -> _t.Any:
        return self._set(self.key_prefix + key, value, timeout=timeout, overwrite=False)

    def has(self, key: str) -> bool:
        return (
            self._get_item(self.key_prefix + key, [self._expiration_time_field])
            is not None
        )

    def clear(self) -> bool:
        paginator = self._dynamo.meta.client.get_paginator("scan")
        pages = paginator.paginate(
            TableName=self._table_name, ProjectionExpression=self._key_field
        )

        with self._table.batch_writer() as batch:
            for page in pages:
                for item in page["Items"]:
                    batch.delete_item(Key=item)

        return True
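A minimal usage sketch for the DynamoDB backend above. The constructor arguments shown here (table name, credentials) are assumptions: they come from earlier in this file and from the boto3 environment, not from this excerpt.

from cachelib.dynamodb import DynamoDbCache

# Assumed constructor; requires an existing DynamoDB table and AWS credentials.
cache = DynamoDbCache(table_name="my-cache-table", default_timeout=300)
cache.set("greeting", {"msg": "hello"}, timeout=60)  # stored with a TTL attribute
print(cache.get("greeting"))                         # {'msg': 'hello'} until expiry
cache.add("greeting", "ignored")                     # add() refuses to overwrite a live item
cache.delete("greeting")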
336  .env/lib/python3.10/site-packages/cachelib/file.py  Normal file
@@ -0,0 +1,336 @@
import errno
import logging
import os
import platform
import stat
import struct
import tempfile
import typing as _t
from contextlib import contextmanager
from hashlib import md5
from pathlib import Path
from time import sleep
from time import time

from cachelib.base import BaseCache
from cachelib.serializers import FileSystemSerializer


class FileSystemCache(BaseCache):
    """A cache that stores the items on the file system. This cache depends
    on being the only user of the `cache_dir`. Make absolutely sure that
    nobody but this cache stores files there or otherwise the cache will
    randomly delete files therein.

    :param cache_dir: the directory where cache files are stored.
    :param threshold: the maximum number of items the cache stores before
                      it starts deleting some. A threshold value of 0
                      indicates no threshold.
    :param default_timeout: the default timeout that is used if no timeout is
                            specified on :meth:`~BaseCache.set`. A timeout of
                            0 indicates that the cache never expires.
    :param mode: the file mode wanted for the cache files, default 0600
    :param hash_method: Default hashlib.md5. The hash method used to
                        generate the filename for cached results.
    """

    #: used for temporary files by the FileSystemCache
    _fs_transaction_suffix = ".__wz_cache"
    #: keep amount of files in a cache element
    _fs_count_file = "__wz_cache_count"

    serializer = FileSystemSerializer()

    def __init__(
        self,
        cache_dir: str,
        threshold: int = 500,
        default_timeout: int = 300,
        mode: _t.Optional[int] = None,
        hash_method: _t.Any = md5,
    ):
        BaseCache.__init__(self, default_timeout)
        self._path = cache_dir
        self._threshold = threshold
        self._hash_method = hash_method

        # Mode set by user takes precedence. If no mode has
        # been given, we need to set the correct default based
        # on user platform.
        self._mode = mode
        if self._mode is None:
            self._mode = self._get_compatible_platform_mode()

        try:
            os.makedirs(self._path)
        except OSError as ex:
            if ex.errno != errno.EEXIST:
                raise

        # If there are many files and a zero threshold,
        # the list_dir can slow initialisation massively
        if self._threshold != 0:
            self._update_count(value=len(list(self._list_dir())))

    def _get_compatible_platform_mode(self) -> int:
        mode = 0o600  # nix systems
        if platform.system() == "Windows":
            mode = stat.S_IWRITE
        return mode

    @property
    def _file_count(self) -> int:
        return self.get(self._fs_count_file) or 0

    def _update_count(
        self, delta: _t.Optional[int] = None, value: _t.Optional[int] = None
    ) -> None:
        # If we have no threshold, don't count files
        if self._threshold == 0:
            return
        if delta:
            new_count = self._file_count + delta
        else:
            new_count = value or 0
        self.set(self._fs_count_file, new_count, mgmt_element=True)

    def _normalize_timeout(self, timeout: _t.Optional[int]) -> int:
        timeout = BaseCache._normalize_timeout(self, timeout)
        if timeout != 0:
            timeout = int(time()) + timeout
        return int(timeout)

    def _is_mgmt(self, name: str) -> bool:
        fshash = self._get_filename(self._fs_count_file).split(os.sep)[-1]
        return name == fshash or name.endswith(self._fs_transaction_suffix)

    def _list_dir(self) -> _t.Generator[str, None, None]:
        """return a list of (fully qualified) cache filenames"""
        return (
            os.path.join(self._path, fn)
            for fn in os.listdir(self._path)
            if not self._is_mgmt(fn)
        )

    def _over_threshold(self) -> bool:
        return self._threshold != 0 and self._file_count > self._threshold

    def _remove_expired(self, now: float) -> None:
        for fname in self._list_dir():
            try:
                with self._safe_stream_open(fname, "rb") as f:
                    expires = struct.unpack("I", f.read(4))[0]
                if expires != 0 and expires < now:
                    os.remove(fname)
                    self._update_count(delta=-1)
            except FileNotFoundError:
                pass
            except (OSError, EOFError, struct.error):
                logging.warning(
                    "Exception raised while handling cache file '%s'",
                    fname,
                    exc_info=True,
                )

    def _remove_older(self) -> bool:
        exp_fname_tuples = []
        for fname in self._list_dir():
            try:
                with self._safe_stream_open(fname, "rb") as f:
                    timestamp = struct.unpack("I", f.read(4))[0]
                exp_fname_tuples.append((timestamp, fname))
            except FileNotFoundError:
                pass
            except (OSError, EOFError, struct.error):
                logging.warning(
                    "Exception raised while handling cache file '%s'",
                    fname,
                    exc_info=True,
                )
        fname_sorted = (
            fname
            for _, fname in sorted(
                exp_fname_tuples, key=lambda item: item[0]  # type: ignore
            )
        )
        for fname in fname_sorted:
            try:
                os.remove(fname)
                self._update_count(delta=-1)
            except FileNotFoundError:
                pass
            except OSError:
                logging.warning(
                    "Exception raised while handling cache file '%s'",
                    fname,
                    exc_info=True,
                )
                return False
            if not self._over_threshold():
                break
        return True

    def _prune(self) -> None:
        if self._over_threshold():
            now = time()
            self._remove_expired(now)
            # if still over threshold
            if self._over_threshold():
                self._remove_older()

    def clear(self) -> bool:
        for i, fname in enumerate(self._list_dir()):
            try:
                os.remove(fname)
            except FileNotFoundError:
                pass
            except OSError:
                logging.warning(
                    "Exception raised while handling cache file '%s'",
                    fname,
                    exc_info=True,
                )
                self._update_count(delta=-i)
                return False
        self._update_count(value=0)
        return True

    def _get_filename(self, key: str) -> str:
        if isinstance(key, str):
            bkey = key.encode("utf-8")  # XXX unicode review
            bkey_hash = self._hash_method(bkey).hexdigest()
        else:
            raise TypeError(f"Key must be a string, received type {type(key)}")
        return os.path.join(self._path, bkey_hash)

    def get(self, key: str) -> _t.Any:
        filename = self._get_filename(key)
        try:
            with self._safe_stream_open(filename, "rb") as f:
                pickle_time = struct.unpack("I", f.read(4))[0]
                if pickle_time == 0 or pickle_time >= time():
                    return self.serializer.load(f)
        except FileNotFoundError:
            pass
        except (OSError, EOFError, struct.error):
            logging.warning(
                "Exception raised while handling cache file '%s'",
                filename,
                exc_info=True,
            )
        return None

    def add(self, key: str, value: _t.Any, timeout: _t.Optional[int] = None) -> bool:
        filename = self._get_filename(key)
        if not os.path.exists(filename):
            return self.set(key, value, timeout)
        return False

    def set(
        self,
        key: str,
        value: _t.Any,
        timeout: _t.Optional[int] = None,
        mgmt_element: bool = False,
    ) -> bool:
        # Management elements have no timeout
        if mgmt_element:
            timeout = 0
        # Don't prune on management element update, to avoid loop
        else:
            self._prune()

        timeout = self._normalize_timeout(timeout)
        filename = self._get_filename(key)
        overwrite = os.path.isfile(filename)

        try:
            fd, tmp = tempfile.mkstemp(
                suffix=self._fs_transaction_suffix, dir=self._path
            )
            with os.fdopen(fd, "wb") as f:
                f.write(struct.pack("I", timeout))
                self.serializer.dump(value, f)

            self._run_safely(os.replace, tmp, filename)
            self._run_safely(os.chmod, filename, self._mode)

            fsize = Path(filename).stat().st_size
        except OSError:
            logging.warning(
                "Exception raised while handling cache file '%s'",
                filename,
                exc_info=True,
            )
            return False
        else:
            # Management elements should not count towards threshold
            if not overwrite and not mgmt_element:
                self._update_count(delta=1)
            return fsize > 0  # function should fail if file is empty

    def delete(self, key: str, mgmt_element: bool = False) -> bool:
        try:
            os.remove(self._get_filename(key))
        except FileNotFoundError:  # if file doesn't exist we consider it deleted
            return True
        except OSError:
            logging.warning("Exception raised while handling cache file", exc_info=True)
            return False
        else:
            # Management elements should not count towards threshold
            if not mgmt_element:
                self._update_count(delta=-1)
            return True

    def has(self, key: str) -> bool:
        filename = self._get_filename(key)
        try:
            with self._safe_stream_open(filename, "rb") as f:
                pickle_time = struct.unpack("I", f.read(4))[0]
                if pickle_time == 0 or pickle_time >= time():
                    return True
                else:
                    return False
        except FileNotFoundError:  # if there is no file there is no key
            return False
        except (OSError, EOFError, struct.error):
            logging.warning(
                "Exception raised while handling cache file '%s'",
                filename,
                exc_info=True,
            )
            return False

    def _run_safely(self, fn: _t.Callable, *args: _t.Any, **kwargs: _t.Any) -> _t.Any:
        """On Windows os.replace, os.chmod and open can yield
        permission errors if executed by two different processes."""
        if platform.system() == "Windows":
            output = None
            wait_step = 0.001
            max_sleep_time = 10.0
            total_sleep_time = 0.0

            while total_sleep_time < max_sleep_time:
                try:
                    output = fn(*args, **kwargs)
                except PermissionError:
                    sleep(wait_step)
                    total_sleep_time += wait_step
                    wait_step *= 2
                else:
                    break
        else:
            output = fn(*args, **kwargs)

        return output

    @contextmanager
    def _safe_stream_open(self, path: str, mode: str) -> _t.Generator:
        fs = self._run_safely(open, path, mode)
        if fs is None:
            raise OSError
        try:
            yield fs
        finally:
            fs.close()
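A minimal usage sketch for FileSystemCache; the directory below is a placeholder and must be dedicated to this cache, as the docstring warns.

from cachelib.file import FileSystemCache

cache = FileSystemCache("/tmp/demo-cache", threshold=500, default_timeout=300)
cache.set("config", {"debug": True}, timeout=0)  # timeout=0 means the entry never expires
assert cache.get("config") == {"debug": True}
assert cache.has("config")
cache.clear()                                    # also resets the management count file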
197  .env/lib/python3.10/site-packages/cachelib/memcached.py  Normal file
@@ -0,0 +1,197 @@
import re
import typing as _t
from time import time

from cachelib.base import BaseCache


_test_memcached_key = re.compile(r"[^\x00-\x21\xff]{1,250}$").match


class MemcachedCache(BaseCache):

    """A cache that uses memcached as backend.

    The first argument can either be an object that resembles the API of a
    :class:`memcache.Client` or a tuple/list of server addresses. In the
    event that a tuple/list is passed, Werkzeug tries to import the best
    available memcache library.

    This cache looks into the following packages/modules to find bindings for
    memcached:

        - ``pylibmc``
        - ``google.appengine.api.memcached``
        - ``memcached``
        - ``libmc``

    Implementation notes: This cache backend works around some limitations in
    memcached to simplify the interface. For example unicode keys are encoded
    to utf-8 on the fly. Methods such as :meth:`~BaseCache.get_dict` return
    the keys in the same format as passed. Furthermore all get methods
    silently ignore key errors to not cause problems when untrusted user data
    is passed to the get methods which is often the case in web applications.

    :param servers: a list or tuple of server addresses or alternatively
                    a :class:`memcache.Client` or a compatible client.
    :param default_timeout: the default timeout that is used if no timeout is
                            specified on :meth:`~BaseCache.set`. A timeout of
                            0 indicates that the cache never expires.
    :param key_prefix: a prefix that is added before all keys. This makes it
                       possible to use the same memcached server for different
                       applications. Keep in mind that
                       :meth:`~BaseCache.clear` will also clear keys with a
                       different prefix.
    """

    def __init__(
        self,
        servers: _t.Any = None,
        default_timeout: int = 300,
        key_prefix: _t.Optional[str] = None,
    ):
        BaseCache.__init__(self, default_timeout)
        if servers is None or isinstance(servers, (list, tuple)):
            if servers is None:
                servers = ["127.0.0.1:11211"]
            self._client = self.import_preferred_memcache_lib(servers)
            if self._client is None:
                raise RuntimeError("no memcache module found")
        else:
            # NOTE: servers is actually an already initialized memcache
            # client.
            self._client = servers

        self.key_prefix = key_prefix

    def _normalize_key(self, key: str) -> str:
        if self.key_prefix:
            key = self.key_prefix + key
        return key

    def _normalize_timeout(self, timeout: _t.Optional[int]) -> int:
        timeout = BaseCache._normalize_timeout(self, timeout)
        if timeout > 0:
            timeout = int(time()) + timeout
        return timeout

    def get(self, key: str) -> _t.Any:
        key = self._normalize_key(key)
        # memcached doesn't support keys longer than that. Because often
        # checks for so long keys can occur because it's tested from user
        # submitted data etc we fail silently for getting.
        if _test_memcached_key(key):
            return self._client.get(key)

    def get_dict(self, *keys: str) -> _t.Dict[str, _t.Any]:
        key_mapping = {}
        for key in keys:
            encoded_key = self._normalize_key(key)
            if _test_memcached_key(key):
                key_mapping[encoded_key] = key
        _keys = list(key_mapping)
        d = rv = self._client.get_multi(_keys)  # type: _t.Dict[str, _t.Any]
        if self.key_prefix:
            rv = {}
            for key, value in d.items():
                rv[key_mapping[key]] = value
        if len(rv) < len(keys):
            for key in keys:
                if key not in rv:
                    rv[key] = None
        return rv

    def add(self, key: str, value: _t.Any, timeout: _t.Optional[int] = None) -> bool:
        key = self._normalize_key(key)
        timeout = self._normalize_timeout(timeout)
        return bool(self._client.add(key, value, timeout))

    def set(
        self, key: str, value: _t.Any, timeout: _t.Optional[int] = None
    ) -> _t.Optional[bool]:
        key = self._normalize_key(key)
        timeout = self._normalize_timeout(timeout)
        return bool(self._client.set(key, value, timeout))

    def get_many(self, *keys: str) -> _t.List[_t.Any]:
        d = self.get_dict(*keys)
        return [d[key] for key in keys]

    def set_many(
        self, mapping: _t.Dict[str, _t.Any], timeout: _t.Optional[int] = None
    ) -> _t.List[_t.Any]:
        new_mapping = {}
        for key, value in mapping.items():
            key = self._normalize_key(key)
            new_mapping[key] = value

        timeout = self._normalize_timeout(timeout)
        failed_keys = self._client.set_multi(
            new_mapping, timeout
        )  # type: _t.List[_t.Any]
        k_normkey = zip(mapping.keys(), new_mapping.keys())  # noqa: B905
        return [k for k, nkey in k_normkey if nkey not in failed_keys]

    def delete(self, key: str) -> bool:
        key = self._normalize_key(key)
        if _test_memcached_key(key):
            return bool(self._client.delete(key))
        return False

    def delete_many(self, *keys: str) -> _t.List[_t.Any]:
        new_keys = []
        for key in keys:
            key = self._normalize_key(key)
            if _test_memcached_key(key):
                new_keys.append(key)
        self._client.delete_multi(new_keys)
        return [k for k in new_keys if not self.has(k)]

    def has(self, key: str) -> bool:
        key = self._normalize_key(key)
        if _test_memcached_key(key):
            return bool(self._client.append(key, ""))
        return False

    def clear(self) -> bool:
        return bool(self._client.flush_all())

    def inc(self, key: str, delta: int = 1) -> _t.Optional[int]:
        key = self._normalize_key(key)
        value = (self._client.get(key) or 0) + delta
        return value if self.set(key, value) else None

    def dec(self, key: str, delta: int = 1) -> _t.Optional[int]:
        key = self._normalize_key(key)
        value = (self._client.get(key) or 0) - delta
        return value if self.set(key, value) else None

    def import_preferred_memcache_lib(self, servers: _t.Any) -> _t.Any:
        """Returns an initialized memcache client. Used by the constructor."""
        try:
            import pylibmc  # type: ignore
        except ImportError:
            pass
        else:
            return pylibmc.Client(servers)

        try:
            from google.appengine.api import memcache  # type: ignore
        except ImportError:
            pass
        else:
            return memcache.Client()

        try:
            import memcache  # type: ignore
        except ImportError:
            pass
        else:
            return memcache.Client(servers)

        try:
            import libmc  # type: ignore
        except ImportError:
            pass
        else:
            return libmc.Client(servers)
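A minimal usage sketch for MemcachedCache, assuming a memcached server on the default address and one of the client libraries listed above installed.

from cachelib.memcached import MemcachedCache

cache = MemcachedCache(servers=["127.0.0.1:11211"], key_prefix="app1:")
cache.set_many({"a": 1, "b": 2}, timeout=120)
print(cache.get_many("a", "b"))  # [1, 2]; missing keys come back as None
cache.inc("a")                   # read-modify-write via get()+set(), not a server-side atomic incr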
0  .env/lib/python3.10/site-packages/cachelib/py.typed  Normal file
149  .env/lib/python3.10/site-packages/cachelib/redis.py  Normal file
@@ -0,0 +1,149 @@
import typing as _t

from cachelib.base import BaseCache
from cachelib.serializers import RedisSerializer


class RedisCache(BaseCache):
    """Uses the Redis key-value store as a cache backend.

    The first argument can be either a string denoting address of the Redis
    server or an object resembling an instance of a redis.Redis class.

    Note: Python Redis API already takes care of encoding unicode strings on
    the fly.

    :param host: address of the Redis server or an object which API is
                 compatible with the official Python Redis client (redis-py).
    :param port: port number on which Redis server listens for connections.
    :param password: password authentication for the Redis server.
    :param db: db (zero-based numeric index) on Redis Server to connect.
    :param default_timeout: the default timeout that is used if no timeout is
                            specified on :meth:`~BaseCache.set`. A timeout of
                            0 indicates that the cache never expires.
    :param key_prefix: A prefix that should be added to all keys.

    Any additional keyword arguments will be passed to ``redis.Redis``.
    """

    _read_client: _t.Any = None
    _write_client: _t.Any = None
    serializer = RedisSerializer()

    def __init__(
        self,
        host: _t.Any = "localhost",
        port: int = 6379,
        password: _t.Optional[str] = None,
        db: int = 0,
        default_timeout: int = 300,
        key_prefix: _t.Optional[str] = None,
        **kwargs: _t.Any
    ):
        BaseCache.__init__(self, default_timeout)
        if host is None:
            raise ValueError("RedisCache host parameter may not be None")
        if isinstance(host, str):
            try:
                import redis
            except ImportError as err:
                raise RuntimeError("no redis module found") from err
            if kwargs.get("decode_responses", None):
                raise ValueError("decode_responses is not supported by RedisCache.")
            self._write_client = self._read_client = redis.Redis(
                host=host, port=port, password=password, db=db, **kwargs
            )
        else:
            self._read_client = self._write_client = host
        self.key_prefix = key_prefix or ""

    def _normalize_timeout(self, timeout: _t.Optional[int]) -> int:
        """Normalize timeout by setting it to default of 300 if
        not defined (None) or -1 if explicitly set to zero.

        :param timeout: timeout to normalize.
        """
        timeout = BaseCache._normalize_timeout(self, timeout)
        if timeout == 0:
            timeout = -1
        return timeout

    def get(self, key: str) -> _t.Any:
        return self.serializer.loads(self._read_client.get(self.key_prefix + key))

    def get_many(self, *keys: str) -> _t.List[_t.Any]:
        if self.key_prefix:
            prefixed_keys = [self.key_prefix + key for key in keys]
        else:
            prefixed_keys = list(keys)
        return [self.serializer.loads(x) for x in self._read_client.mget(prefixed_keys)]

    def set(self, key: str, value: _t.Any, timeout: _t.Optional[int] = None) -> _t.Any:
        timeout = self._normalize_timeout(timeout)
        dump = self.serializer.dumps(value)
        if timeout == -1:
            result = self._write_client.set(name=self.key_prefix + key, value=dump)
        else:
            result = self._write_client.setex(
                name=self.key_prefix + key, value=dump, time=timeout
            )
        return result

    def add(self, key: str, value: _t.Any, timeout: _t.Optional[int] = None) -> _t.Any:
        timeout = self._normalize_timeout(timeout)
        dump = self.serializer.dumps(value)
        created = self._write_client.setnx(name=self.key_prefix + key, value=dump)
        # handle case where timeout is explicitly set to zero
        if created and timeout != -1:
            self._write_client.expire(name=self.key_prefix + key, time=timeout)
        return created

    def set_many(
        self, mapping: _t.Dict[str, _t.Any], timeout: _t.Optional[int] = None
    ) -> _t.List[_t.Any]:
        timeout = self._normalize_timeout(timeout)
        # Use transaction=False to batch without calling redis MULTI
        # which is not supported by twemproxy
        pipe = self._write_client.pipeline(transaction=False)

        for key, value in mapping.items():
            dump = self.serializer.dumps(value)
            if timeout == -1:
                pipe.set(name=self.key_prefix + key, value=dump)
            else:
                pipe.setex(name=self.key_prefix + key, value=dump, time=timeout)
        results = pipe.execute()
        res = zip(mapping.keys(), results)  # noqa: B905
        return [k for k, was_set in res if was_set]

    def delete(self, key: str) -> bool:
        return bool(self._write_client.delete(self.key_prefix + key))

    def delete_many(self, *keys: str) -> _t.List[_t.Any]:
        if not keys:
            return []
        if self.key_prefix:
            prefixed_keys = [self.key_prefix + key for key in keys]
        else:
            prefixed_keys = [k for k in keys]
        self._write_client.delete(*prefixed_keys)
        return [k for k in prefixed_keys if not self.has(k)]

    def has(self, key: str) -> bool:
        return bool(self._read_client.exists(self.key_prefix + key))

    def clear(self) -> bool:
        status = 0
        if self.key_prefix:
            keys = self._read_client.keys(self.key_prefix + "*")
            if keys:
                status = self._write_client.delete(*keys)
        else:
            status = self._write_client.flushdb()
        return bool(status)

    def inc(self, key: str, delta: int = 1) -> _t.Any:
        return self._write_client.incr(name=self.key_prefix + key, amount=delta)

    def dec(self, key: str, delta: int = 1) -> _t.Any:
        return self._write_client.incr(name=self.key_prefix + key, amount=-delta)
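A minimal usage sketch for RedisCache, assuming a local Redis server and the redis-py package.

from cachelib.redis import RedisCache

cache = RedisCache(host="localhost", port=6379, db=0, key_prefix="app1:")
cache.set("token", b"abc", timeout=0)  # timeout=0 normalizes to -1: plain SET, no expiry
cache.add("token", b"ignored")         # SETNX semantics: a no-op while the key exists
print(cache.get("token"))
cache.delete_many("token")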
114  .env/lib/python3.10/site-packages/cachelib/serializers.py  Normal file
@@ -0,0 +1,114 @@
import logging
import pickle
import typing as _t


class BaseSerializer:
    """This is the base interface for all default serializers.

    BaseSerializer.load and BaseSerializer.dump will
    default to pickle.load and pickle.dump. This is currently
    used only by FileSystemCache which dumps/loads to/from a file stream.
    """

    def _warn(self, e: pickle.PickleError) -> None:
        logging.warning(
            f"An exception has been raised during a pickling operation: {e}"
        )

    def dump(
        self, value: int, f: _t.IO, protocol: int = pickle.HIGHEST_PROTOCOL
    ) -> None:
        try:
            pickle.dump(value, f, protocol)
        except (pickle.PickleError, pickle.PicklingError) as e:
            self._warn(e)

    def load(self, f: _t.BinaryIO) -> _t.Any:
        try:
            data = pickle.load(f)
        except pickle.PickleError as e:
            self._warn(e)
            return None
        else:
            return data

    """BaseSerializer.loads and BaseSerializer.dumps
    work on top of pickle.loads and pickle.dumps. Dumping/loading
    strings and byte strings is the default for most cache types.
    """

    def dumps(self, value: _t.Any, protocol: int = pickle.HIGHEST_PROTOCOL) -> bytes:
        try:
            serialized = pickle.dumps(value, protocol)
        except (pickle.PickleError, pickle.PicklingError) as e:
            self._warn(e)
        return serialized

    def loads(self, bvalue: bytes) -> _t.Any:
        try:
            data = pickle.loads(bvalue)
        except pickle.PickleError as e:
            self._warn(e)
            return None
        else:
            return data


"""Default serializers for each cache type.

The following classes can be used to further customize
serialization behaviour. Alternatively, any serializer can be
overridden in order to use a custom serializer with a different
strategy altogether.
"""


class UWSGISerializer(BaseSerializer):
    """Default serializer for UWSGICache."""


class SimpleSerializer(BaseSerializer):
    """Default serializer for SimpleCache."""


class FileSystemSerializer(BaseSerializer):
    """Default serializer for FileSystemCache."""


class RedisSerializer(BaseSerializer):
    """Default serializer for RedisCache."""

    def dumps(self, value: _t.Any, protocol: int = pickle.HIGHEST_PROTOCOL) -> bytes:
        """Dumps an object into a string for redis. By default it serializes
        integers as regular string and pickle dumps everything else.
        """
        return b"!" + pickle.dumps(value, protocol)

    def loads(self, value: _t.Optional[bytes]) -> _t.Any:
        """The reversal of :meth:`dump_object`. This might be called with
        None.
        """
        if value is None:
            return None
        if value.startswith(b"!"):
            try:
                return pickle.loads(value[1:])
            except pickle.PickleError:
                return None
        try:
            return int(value)
        except ValueError:
            # before 0.8 we did not have serialization. Still support that.
            return value


class DynamoDbSerializer(RedisSerializer):
    """Default serializer for DynamoDbCache."""

    def loads(self, value: _t.Any) -> _t.Any:
        """The reversal of :meth:`dump_object`. This might be called with
        None.
        """
        value = value.value
        return super().loads(value)
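A quick round-trip through RedisSerializer, grounded in the code above: dumps() prefixes pickled payloads with b"!", while loads() also accepts legacy plain-integer payloads and None.

from cachelib.serializers import RedisSerializer

s = RedisSerializer()
blob = s.dumps({"user": 42})
assert blob.startswith(b"!")          # pickled payloads carry the b"!" marker
assert s.loads(blob) == {"user": 42}
assert s.loads(b"123") == 123         # pre-0.8 unserialized integer still supported
assert s.loads(None) is None          # a cache miss passes through as None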
104  .env/lib/python3.10/site-packages/cachelib/simple.py  Normal file
@@ -0,0 +1,104 @@
import typing as _t
from time import time

from cachelib.base import BaseCache
from cachelib.serializers import SimpleSerializer


class SimpleCache(BaseCache):

    """Simple memory cache for single process environments. This class exists
    mainly for the development server and is not 100% thread safe. It tries
    to use as many atomic operations as possible and no locks for simplicity
    but it could happen under heavy load that keys are added multiple times.

    :param threshold: the maximum number of items the cache stores before
                      it starts deleting some.
    :param default_timeout: the default timeout that is used if no timeout is
                            specified on :meth:`~BaseCache.set`. A timeout of
                            0 indicates that the cache never expires.
    """

    serializer = SimpleSerializer()

    def __init__(
        self,
        threshold: int = 500,
        default_timeout: int = 300,
    ):
        BaseCache.__init__(self, default_timeout)
        self._cache: _t.Dict[str, _t.Any] = {}
        self._threshold = threshold or 500  # threshold = 0

    def _over_threshold(self) -> bool:
        return len(self._cache) > self._threshold

    def _remove_expired(self, now: float) -> None:
        toremove = [k for k, (expires, _) in self._cache.items() if expires < now]
        for k in toremove:
            self._cache.pop(k, None)

    def _remove_older(self) -> None:
        k_ordered = (
            k
            for k, v in sorted(
                self._cache.items(), key=lambda item: item[1][0]  # type: ignore
            )
        )
        for k in k_ordered:
            self._cache.pop(k, None)
            if not self._over_threshold():
                break

    def _prune(self) -> None:
        if self._over_threshold():
            now = time()
            self._remove_expired(now)
            # remove older items if still over threshold
            if self._over_threshold():
                self._remove_older()

    def _normalize_timeout(self, timeout: _t.Optional[int]) -> int:
        timeout = BaseCache._normalize_timeout(self, timeout)
        if timeout > 0:
            timeout = int(time()) + timeout
        return timeout

    def get(self, key: str) -> _t.Any:
        try:
            expires, value = self._cache[key]
            if expires == 0 or expires > time():
                return self.serializer.loads(value)
        except KeyError:
            return None

    def set(
        self, key: str, value: _t.Any, timeout: _t.Optional[int] = None
    ) -> _t.Optional[bool]:
        expires = self._normalize_timeout(timeout)
        self._prune()
        self._cache[key] = (expires, self.serializer.dumps(value))
        return True

    def add(self, key: str, value: _t.Any, timeout: _t.Optional[int] = None) -> bool:
        expires = self._normalize_timeout(timeout)
        self._prune()
        item = (expires, self.serializer.dumps(value))
        if key in self._cache:
            return False
        self._cache.setdefault(key, item)
        return True

    def delete(self, key: str) -> bool:
        return self._cache.pop(key, None) is not None

    def has(self, key: str) -> bool:
        try:
            expires, value = self._cache[key]
            return bool(expires == 0 or expires > time())
        except KeyError:
            return False

    def clear(self) -> bool:
        self._cache.clear()
        return not bool(self._cache)
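A minimal usage sketch for SimpleCache; being per-process and not fully thread safe, it suits tests and the development server rather than multi-worker deployments.

from cachelib.simple import SimpleCache

cache = SimpleCache(threshold=500, default_timeout=300)
cache.set("n", 1)
assert cache.get("n") == 1
assert not cache.add("n", 2)  # add() refuses keys that already exist
cache.delete("n")
assert cache.clear()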
83  .env/lib/python3.10/site-packages/cachelib/uwsgi.py  Normal file
@@ -0,0 +1,83 @@
import platform
import typing as _t

from cachelib.base import BaseCache
from cachelib.serializers import UWSGISerializer


class UWSGICache(BaseCache):
    """Implements the cache using uWSGI's caching framework.

    .. note::
        This class cannot be used when running under PyPy, because the uWSGI
        API implementation for PyPy is lacking the needed functionality.

    :param default_timeout: The default timeout in seconds.
    :param cache: The name of the caching instance to connect to, for
        example: mycache@localhost:3031, defaults to an empty string, which
        means uWSGI will cache in the local instance. If the cache is in the
        same instance as the werkzeug app, you only have to provide the name of
        the cache.
    """

    serializer = UWSGISerializer()

    def __init__(
        self,
        default_timeout: int = 300,
        cache: str = "",
    ):
        BaseCache.__init__(self, default_timeout)

        if platform.python_implementation() == "PyPy":
            raise RuntimeError(
                "uWSGI caching does not work under PyPy, see "
                "the docs for more details."
            )

        try:
            import uwsgi  # type: ignore

            self._uwsgi = uwsgi
        except ImportError as err:
            raise RuntimeError(
                "uWSGI could not be imported, are you running under uWSGI?"
            ) from err

        self.cache = cache

    def get(self, key: str) -> _t.Any:
        rv = self._uwsgi.cache_get(key, self.cache)
        if rv is None:
            return
        return self.serializer.loads(rv)

    def delete(self, key: str) -> bool:
        return bool(self._uwsgi.cache_del(key, self.cache))

    def set(
        self, key: str, value: _t.Any, timeout: _t.Optional[int] = None
    ) -> _t.Optional[bool]:
        result = self._uwsgi.cache_update(
            key,
            self.serializer.dumps(value),
            self._normalize_timeout(timeout),
            self.cache,
        )  # type: bool
        return result

    def add(self, key: str, value: _t.Any, timeout: _t.Optional[int] = None) -> bool:
        return bool(
            self._uwsgi.cache_set(
                key,
                self.serializer.dumps(value),
                self._normalize_timeout(timeout),
                self.cache,
            )
        )

    def clear(self) -> bool:
        return bool(self._uwsgi.cache_clear(self.cache))

    def has(self, key: str) -> bool:
        return self._uwsgi.cache_exists(key, self.cache) is not None
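A sketch only: UWSGICache works solely inside a uWSGI worker where the uwsgi module is importable and a cache was configured at startup; the startup flag and cache name below are assumptions about the deployment, not part of this file.

# e.g. started with: uwsgi --cache2 name=default,items=100 ...  (assumed invocation)
from cachelib.uwsgi import UWSGICache

cache = UWSGICache(default_timeout=60, cache="default")
cache.set("counter", 1)
print(cache.get("counter"))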
109  .env/lib/python3.10/site-packages/flask_session/__init__.py  Normal file
@@ -0,0 +1,109 @@
# -*- coding: utf-8 -*-
"""
    flask_session
    ~~~~~~~~~~~~~

    Adds server session support to your application.

    :copyright: (c) 2014 by Shipeng Feng.
    :license: BSD, see LICENSE for more details.
"""

__version__ = '0.4.0'

import os

from .sessions import NullSessionInterface, RedisSessionInterface, \
    MemcachedSessionInterface, FileSystemSessionInterface, \
    MongoDBSessionInterface, SqlAlchemySessionInterface


class Session(object):
    """This class is used to add Server-side Session to one or more Flask
    applications.

    There are two usage modes. One is to initialize the instance with a very
    specific Flask application::

        app = Flask(__name__)
        Session(app)

    The second possibility is to create the object once and configure the
    application later::

        sess = Session()

        def create_app():
            app = Flask(__name__)
            sess.init_app(app)
            return app

    By default Flask-Session will use :class:`NullSessionInterface`; you
    really should configure your app to use a different SessionInterface.

    .. note::

        You cannot use the ``Session`` instance directly; what ``Session``
        does is just change the :attr:`~flask.Flask.session_interface`
        attribute on your Flask applications.
    """

    def __init__(self, app=None):
        self.app = app
        if app is not None:
            self.init_app(app)

    def init_app(self, app):
        """This is used to set up session for your app object.

        :param app: the Flask app object with proper configuration.
        """
        app.session_interface = self._get_interface(app)

    def _get_interface(self, app):
        config = app.config.copy()
        config.setdefault('SESSION_TYPE', 'null')
        config.setdefault('SESSION_PERMANENT', True)
        config.setdefault('SESSION_USE_SIGNER', False)
        config.setdefault('SESSION_KEY_PREFIX', 'session:')
        config.setdefault('SESSION_REDIS', None)
        config.setdefault('SESSION_MEMCACHED', None)
        config.setdefault('SESSION_FILE_DIR',
                          os.path.join(os.getcwd(), 'flask_session'))
        config.setdefault('SESSION_FILE_THRESHOLD', 500)
        config.setdefault('SESSION_FILE_MODE', 384)
        config.setdefault('SESSION_MONGODB', None)
        config.setdefault('SESSION_MONGODB_DB', 'flask_session')
        config.setdefault('SESSION_MONGODB_COLLECT', 'sessions')
        config.setdefault('SESSION_SQLALCHEMY', None)
        config.setdefault('SESSION_SQLALCHEMY_TABLE', 'sessions')

        if config['SESSION_TYPE'] == 'redis':
            session_interface = RedisSessionInterface(
                config['SESSION_REDIS'], config['SESSION_KEY_PREFIX'],
                config['SESSION_USE_SIGNER'], config['SESSION_PERMANENT'])
        elif config['SESSION_TYPE'] == 'memcached':
            session_interface = MemcachedSessionInterface(
                config['SESSION_MEMCACHED'], config['SESSION_KEY_PREFIX'],
                config['SESSION_USE_SIGNER'], config['SESSION_PERMANENT'])
        elif config['SESSION_TYPE'] == 'filesystem':
            session_interface = FileSystemSessionInterface(
                config['SESSION_FILE_DIR'], config['SESSION_FILE_THRESHOLD'],
                config['SESSION_FILE_MODE'], config['SESSION_KEY_PREFIX'],
                config['SESSION_USE_SIGNER'], config['SESSION_PERMANENT'])
        elif config['SESSION_TYPE'] == 'mongodb':
            session_interface = MongoDBSessionInterface(
                config['SESSION_MONGODB'], config['SESSION_MONGODB_DB'],
                config['SESSION_MONGODB_COLLECT'],
                config['SESSION_KEY_PREFIX'], config['SESSION_USE_SIGNER'],
                config['SESSION_PERMANENT'])
        elif config['SESSION_TYPE'] == 'sqlalchemy':
            session_interface = SqlAlchemySessionInterface(
                app, config['SESSION_SQLALCHEMY'],
                config['SESSION_SQLALCHEMY_TABLE'],
                config['SESSION_KEY_PREFIX'], config['SESSION_USE_SIGNER'],
                config['SESSION_PERMANENT'])
        else:
            session_interface = NullSessionInterface()

        return session_interface
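A minimal sketch of wiring Flask-Session into an app using the config keys handled by _get_interface above; the secret key and session directory are placeholders.

from flask import Flask, session
from flask_session import Session

app = Flask(__name__)
app.config["SECRET_KEY"] = "change-me"
app.config["SESSION_TYPE"] = "filesystem"  # or 'redis', 'memcached', 'mongodb', 'sqlalchemy'
app.config["SESSION_FILE_DIR"] = "/tmp/flask_session"
Session(app)                               # swaps in FileSystemSessionInterface

@app.route("/visit")
def visit():
    session["visits"] = session.get("visits", 0) + 1
    return str(session["visits"])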
Binary file not shown.
586  .env/lib/python3.10/site-packages/flask_session/sessions.py  Normal file
@@ -0,0 +1,586 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
flask_session.sessions
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Server-side Sessions and SessionInterfaces.
|
||||
|
||||
:copyright: (c) 2014 by Shipeng Feng.
|
||||
:license: BSD, see LICENSE for more details.
|
||||
"""
|
||||
import sys
|
||||
import time
|
||||
from datetime import datetime
|
||||
from uuid import uuid4
|
||||
try:
|
||||
import cPickle as pickle
|
||||
except ImportError:
|
||||
import pickle
|
||||
|
||||
from flask.sessions import SessionInterface as FlaskSessionInterface
|
||||
from flask.sessions import SessionMixin
|
||||
from werkzeug.datastructures import CallbackDict
|
||||
from itsdangerous import Signer, BadSignature, want_bytes
|
||||
|
||||
|
||||
PY2 = sys.version_info[0] == 2
|
||||
if not PY2:
|
||||
text_type = str
|
||||
else:
|
||||
text_type = unicode
|
||||
|
||||
|
||||
def total_seconds(td):
|
||||
return td.days * 60 * 60 * 24 + td.seconds
|
||||
|
||||
|
||||
class ServerSideSession(CallbackDict, SessionMixin):
|
||||
"""Baseclass for server-side based sessions."""
|
||||
|
||||
def __init__(self, initial=None, sid=None, permanent=None):
|
||||
def on_update(self):
|
||||
self.modified = True
|
||||
CallbackDict.__init__(self, initial, on_update)
|
||||
self.sid = sid
|
||||
if permanent:
|
||||
self.permanent = permanent
|
||||
self.modified = False
|
||||
|
||||
|
||||
class RedisSession(ServerSideSession):
|
||||
pass
|
||||
|
||||
|
||||
class MemcachedSession(ServerSideSession):
|
||||
pass
|
||||
|
||||
|
||||
class FileSystemSession(ServerSideSession):
|
||||
pass
|
||||
|
||||
|
||||
class MongoDBSession(ServerSideSession):
|
||||
pass
|
||||
|
||||
|
||||
class SqlAlchemySession(ServerSideSession):
|
||||
pass
|
||||
|
||||
|
||||
class SessionInterface(FlaskSessionInterface):
|
||||
|
||||
def _generate_sid(self):
|
||||
return str(uuid4())
|
||||
|
||||
def _get_signer(self, app):
|
||||
if not app.secret_key:
|
||||
return None
|
||||
return Signer(app.secret_key, salt='flask-session',
|
||||
key_derivation='hmac')
|
||||
|
||||
|
||||
class NullSessionInterface(SessionInterface):
|
||||
"""Used to open a :class:`flask.sessions.NullSession` instance.
|
||||
"""
|
||||
|
||||
def open_session(self, app, request):
|
||||
return None
|
||||
|
||||
|
||||
class RedisSessionInterface(SessionInterface):
|
||||
"""Uses the Redis key-value store as a session backend.
|
||||
|
||||
.. versionadded:: 0.2
|
||||
The `use_signer` parameter was added.
|
||||
|
||||
:param redis: A ``redis.Redis`` instance.
|
||||
:param key_prefix: A prefix that is added to all Redis store keys.
|
||||
:param use_signer: Whether to sign the session id cookie or not.
|
||||
:param permanent: Whether to use permanent session or not.
|
||||
"""
|
||||
|
||||
serializer = pickle
|
||||
session_class = RedisSession
|
||||
|
||||
def __init__(self, redis, key_prefix, use_signer=False, permanent=True):
|
||||
if redis is None:
|
||||
from redis import Redis
|
||||
redis = Redis()
|
||||
self.redis = redis
|
||||
self.key_prefix = key_prefix
|
||||
self.use_signer = use_signer
|
||||
self.permanent = permanent
|
||||
self.has_same_site_capability = hasattr(self, "get_cookie_samesite")
|
||||
|
||||
def open_session(self, app, request):
|
||||
sid = request.cookies.get(app.session_cookie_name)
|
||||
if not sid:
|
||||
sid = self._generate_sid()
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
if self.use_signer:
|
||||
signer = self._get_signer(app)
|
||||
if signer is None:
|
||||
return None
|
||||
try:
|
||||
sid_as_bytes = signer.unsign(sid)
|
||||
sid = sid_as_bytes.decode()
|
||||
except BadSignature:
|
||||
sid = self._generate_sid()
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
|
||||
if not PY2 and not isinstance(sid, text_type):
|
||||
sid = sid.decode('utf-8', 'strict')
|
||||
val = self.redis.get(self.key_prefix + sid)
|
||||
if val is not None:
|
||||
try:
|
||||
data = self.serializer.loads(val)
|
||||
return self.session_class(data, sid=sid)
|
||||
except:
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
|
||||
def save_session(self, app, session, response):
|
||||
domain = self.get_cookie_domain(app)
|
||||
path = self.get_cookie_path(app)
|
||||
if not session:
|
||||
if session.modified:
|
||||
self.redis.delete(self.key_prefix + session.sid)
|
||||
response.delete_cookie(app.session_cookie_name,
|
||||
domain=domain, path=path)
|
||||
return
|
||||
|
||||
# Modification case. There are upsides and downsides to
|
||||
# emitting a set-cookie header each request. The behavior
|
||||
# is controlled by the :meth:`should_set_cookie` method
|
||||
# which performs a quick check to figure out if the cookie
|
||||
# should be set or not. This is controlled by the
|
||||
# SESSION_REFRESH_EACH_REQUEST config flag as well as
|
||||
# the permanent flag on the session itself.
|
||||
# if not self.should_set_cookie(app, session):
|
||||
# return
|
||||
conditional_cookie_kwargs = {}
|
||||
httponly = self.get_cookie_httponly(app)
|
||||
secure = self.get_cookie_secure(app)
|
||||
if self.has_same_site_capability:
|
||||
conditional_cookie_kwargs["samesite"] = self.get_cookie_samesite(app)
|
||||
expires = self.get_expiration_time(app, session)
|
||||
val = self.serializer.dumps(dict(session))
|
||||
self.redis.setex(name=self.key_prefix + session.sid, value=val,
|
||||
time=total_seconds(app.permanent_session_lifetime))
|
||||
if self.use_signer:
|
||||
session_id = self._get_signer(app).sign(want_bytes(session.sid))
|
||||
else:
|
||||
session_id = session.sid
|
||||
response.set_cookie(app.session_cookie_name, session_id,
|
||||
expires=expires, httponly=httponly,
|
||||
domain=domain, path=path, secure=secure,
|
||||
**conditional_cookie_kwargs)
|
||||
|
||||
|
||||
class MemcachedSessionInterface(SessionInterface):
|
||||
"""A Session interface that uses memcached as backend.
|
||||
|
||||
.. versionadded:: 0.2
|
||||
The `use_signer` parameter was added.
|
||||
|
||||
:param client: A ``memcache.Client`` instance.
|
||||
:param key_prefix: A prefix that is added to all Memcached store keys.
|
||||
:param use_signer: Whether to sign the session id cookie or not.
|
||||
:param permanent: Whether to use permanent session or not.
|
||||
"""
|
||||
|
||||
serializer = pickle
|
||||
session_class = MemcachedSession
|
||||
|
||||
def __init__(self, client, key_prefix, use_signer=False, permanent=True):
|
||||
if client is None:
|
||||
client = self._get_preferred_memcache_client()
|
||||
if client is None:
|
||||
raise RuntimeError('no memcache module found')
|
||||
self.client = client
|
||||
self.key_prefix = key_prefix
|
||||
self.use_signer = use_signer
|
||||
self.permanent = permanent
|
||||
self.has_same_site_capability = hasattr(self, "get_cookie_samesite")
|
||||
|
||||
def _get_preferred_memcache_client(self):
|
||||
servers = ['127.0.0.1:11211']
|
||||
try:
|
||||
import pylibmc
|
||||
except ImportError:
|
||||
pass
|
||||
else:
|
||||
return pylibmc.Client(servers)
|
||||
|
||||
try:
|
||||
import memcache
|
||||
except ImportError:
|
||||
pass
|
||||
else:
|
||||
return memcache.Client(servers)
|
||||
|
||||
def _get_memcache_timeout(self, timeout):
|
||||
"""
|
||||
Memcached deals with long (> 30 days) timeouts in a special
|
||||
way. Call this function to obtain a safe value for your timeout.
|
||||
"""
|
||||
if timeout > 2592000: # 60*60*24*30, 30 days
|
||||
# See http://code.google.com/p/memcached/wiki/FAQ
|
||||
# "You can set expire times up to 30 days in the future. After that
|
||||
# memcached interprets it as a date, and will expire the item after
|
||||
# said date. This is a simple (but obscure) mechanic."
|
||||
#
|
||||
# This means that we have to switch to absolute timestamps.
|
||||
timeout += int(time.time())
|
||||
return timeout
|
||||
|
||||
def open_session(self, app, request):
|
||||
sid = request.cookies.get(app.session_cookie_name)
|
||||
if not sid:
|
||||
sid = self._generate_sid()
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
if self.use_signer:
|
||||
signer = self._get_signer(app)
|
||||
if signer is None:
|
||||
return None
|
||||
try:
|
||||
sid_as_bytes = signer.unsign(sid)
|
||||
sid = sid_as_bytes.decode()
|
||||
except BadSignature:
|
||||
sid = self._generate_sid()
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
|
||||
full_session_key = self.key_prefix + sid
|
||||
if PY2 and isinstance(full_session_key, unicode):
|
||||
full_session_key = full_session_key.encode('utf-8')
|
||||
val = self.client.get(full_session_key)
|
||||
if val is not None:
|
||||
try:
|
||||
if not PY2:
|
||||
val = want_bytes(val)
|
||||
data = self.serializer.loads(val)
|
||||
return self.session_class(data, sid=sid)
|
||||
except:
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
|
||||
def save_session(self, app, session, response):
|
||||
domain = self.get_cookie_domain(app)
|
||||
path = self.get_cookie_path(app)
|
||||
full_session_key = self.key_prefix + session.sid
|
||||
if PY2 and isinstance(full_session_key, unicode):
|
||||
full_session_key = full_session_key.encode('utf-8')
|
||||
if not session:
|
||||
if session.modified:
|
||||
self.client.delete(full_session_key)
|
||||
response.delete_cookie(app.session_cookie_name,
|
||||
domain=domain, path=path)
|
||||
return
|
||||
|
||||
conditional_cookie_kwargs = {}
|
||||
httponly = self.get_cookie_httponly(app)
|
||||
secure = self.get_cookie_secure(app)
|
||||
if self.has_same_site_capability:
|
||||
conditional_cookie_kwargs["samesite"] = self.get_cookie_samesite(app)
|
||||
expires = self.get_expiration_time(app, session)
|
||||
if not PY2:
|
||||
val = self.serializer.dumps(dict(session), 0)
|
||||
else:
|
||||
val = self.serializer.dumps(dict(session))
|
||||
self.client.set(full_session_key, val, self._get_memcache_timeout(
|
||||
total_seconds(app.permanent_session_lifetime)))
|
||||
if self.use_signer:
|
||||
session_id = self._get_signer(app).sign(want_bytes(session.sid))
|
||||
else:
|
||||
session_id = session.sid
|
||||
response.set_cookie(app.session_cookie_name, session_id,
|
||||
expires=expires, httponly=httponly,
|
||||
domain=domain, path=path, secure=secure,
|
||||
**conditional_cookie_kwargs)
|
||||
|
||||
|
||||
class FileSystemSessionInterface(SessionInterface):
|
||||
"""Uses the :class:`cachelib.file.FileSystemCache` as a session backend.
|
||||
|
||||
.. versionadded:: 0.2
|
||||
The `use_signer` parameter was added.
|
||||
|
||||
:param cache_dir: the directory where session files are stored.
|
||||
:param threshold: the maximum number of items the session stores before it
|
||||
starts deleting some.
|
||||
:param mode: the file mode wanted for the session files, default 0600
|
||||
:param key_prefix: A prefix that is added to FileSystemCache store keys.
|
||||
:param use_signer: Whether to sign the session id cookie or not.
|
||||
:param permanent: Whether to use permanent session or not.
|
||||
"""
|
||||
|
||||
session_class = FileSystemSession
|
||||
|
||||
def __init__(self, cache_dir, threshold, mode, key_prefix,
|
||||
use_signer=False, permanent=True):
|
||||
from cachelib.file import FileSystemCache
|
||||
self.cache = FileSystemCache(cache_dir, threshold=threshold, mode=mode)
|
||||
self.key_prefix = key_prefix
|
||||
self.use_signer = use_signer
|
||||
self.permanent = permanent
|
||||
self.has_same_site_capability = hasattr(self, "get_cookie_samesite")
|
||||
|
||||
def open_session(self, app, request):
|
||||
sid = request.cookies.get(app.session_cookie_name)
|
||||
if not sid:
|
||||
sid = self._generate_sid()
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
if self.use_signer:
|
||||
signer = self._get_signer(app)
|
||||
if signer is None:
|
||||
return None
|
||||
try:
|
||||
sid_as_bytes = signer.unsign(sid)
|
||||
sid = sid_as_bytes.decode()
|
||||
except BadSignature:
|
||||
sid = self._generate_sid()
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
|
||||
data = self.cache.get(self.key_prefix + sid)
|
||||
if data is not None:
|
||||
return self.session_class(data, sid=sid)
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
|
||||
def save_session(self, app, session, response):
|
||||
domain = self.get_cookie_domain(app)
|
||||
path = self.get_cookie_path(app)
|
||||
if not session:
|
||||
if session.modified:
|
||||
self.cache.delete(self.key_prefix + session.sid)
|
||||
response.delete_cookie(app.session_cookie_name,
|
||||
domain=domain, path=path)
|
||||
return
|
||||
|
||||
conditional_cookie_kwargs = {}
|
||||
httponly = self.get_cookie_httponly(app)
|
||||
secure = self.get_cookie_secure(app)
|
||||
if self.has_same_site_capability:
|
||||
conditional_cookie_kwargs["samesite"] = self.get_cookie_samesite(app)
|
||||
expires = self.get_expiration_time(app, session)
|
||||
data = dict(session)
|
||||
self.cache.set(self.key_prefix + session.sid, data,
|
||||
total_seconds(app.permanent_session_lifetime))
|
||||
if self.use_signer:
|
||||
session_id = self._get_signer(app).sign(want_bytes(session.sid))
|
||||
else:
|
||||
session_id = session.sid
|
||||
response.set_cookie(app.session_cookie_name, session_id,
|
||||
expires=expires, httponly=httponly,
|
||||
domain=domain, path=path, secure=secure,
|
||||
**conditional_cookie_kwargs)
|
||||
|
||||
|
||||
class MongoDBSessionInterface(SessionInterface):
|
||||
"""A Session interface that uses mongodb as backend.
|
||||
|
||||
.. versionadded:: 0.2
|
||||
The `use_signer` parameter was added.
|
||||
|
||||
:param client: A ``pymongo.MongoClient`` instance.
|
||||
:param db: The database you want to use.
|
||||
:param collection: The collection you want to use.
|
||||
:param key_prefix: A prefix that is added to all MongoDB store keys.
|
||||
:param use_signer: Whether to sign the session id cookie or not.
|
||||
:param permanent: Whether to use permanent session or not.
|
||||
"""
|
||||
|
||||
serializer = pickle
|
||||
session_class = MongoDBSession
|
||||
|
||||
def __init__(self, client, db, collection, key_prefix, use_signer=False,
|
||||
permanent=True):
|
||||
if client is None:
|
||||
from pymongo import MongoClient
|
||||
client = MongoClient()
|
||||
self.client = client
|
||||
self.store = client[db][collection]
|
||||
self.key_prefix = key_prefix
|
||||
self.use_signer = use_signer
|
||||
self.permanent = permanent
|
||||
self.has_same_site_capability = hasattr(self, "get_cookie_samesite")
|
||||
|
||||
def open_session(self, app, request):
|
||||
sid = request.cookies.get(app.session_cookie_name)
|
||||
if not sid:
|
||||
sid = self._generate_sid()
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
if self.use_signer:
|
||||
signer = self._get_signer(app)
|
||||
if signer is None:
|
||||
return None
|
||||
try:
|
||||
sid_as_bytes = signer.unsign(sid)
|
||||
sid = sid_as_bytes.decode()
|
||||
except BadSignature:
|
||||
sid = self._generate_sid()
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
|
||||
store_id = self.key_prefix + sid
|
||||
document = self.store.find_one({'id': store_id})
|
||||
if document and document.get('expiration') <= datetime.utcnow():
|
||||
# Delete expired session
|
||||
self.store.remove({'id': store_id})
|
||||
document = None
|
||||
if document is not None:
|
||||
try:
|
||||
val = document['val']
|
||||
data = self.serializer.loads(want_bytes(val))
|
||||
return self.session_class(data, sid=sid)
|
||||
except:
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
return self.session_class(sid=sid, permanent=self.permanent)
|
||||
|
||||
def save_session(self, app, session, response):
|
||||
domain = self.get_cookie_domain(app)
|
||||
path = self.get_cookie_path(app)
|
||||
store_id = self.key_prefix + session.sid
|
||||
if not session:
|
||||
if session.modified:
|
||||
self.store.remove({'id': store_id})
|
||||
response.delete_cookie(app.session_cookie_name,
|
||||
domain=domain, path=path)
|
||||
return
|
||||
|
||||
conditional_cookie_kwargs = {}
|
||||
httponly = self.get_cookie_httponly(app)
|
||||
secure = self.get_cookie_secure(app)
|
||||
if self.has_same_site_capability:
|
||||
conditional_cookie_kwargs["samesite"] = self.get_cookie_samesite(app)
|
||||
expires = self.get_expiration_time(app, session)
|
||||
val = self.serializer.dumps(dict(session))
|
||||
self.store.update({'id': store_id},
|
||||
{'id': store_id,
|
||||
'val': val,
|
||||
'expiration': expires}, True)
|
||||
if self.use_signer:
|
||||
session_id = self._get_signer(app).sign(want_bytes(session.sid))
|
||||
else:
|
||||
session_id = session.sid
|
||||
response.set_cookie(app.session_cookie_name, session_id,
|
||||
expires=expires, httponly=httponly,
|
||||
domain=domain, path=path, secure=secure,
|
||||
**conditional_cookie_kwargs)
|
||||
|
||||
|
||||
class SqlAlchemySessionInterface(SessionInterface):
    """Uses Flask-SQLAlchemy from a Flask app as a session backend.

    .. versionadded:: 0.2

    :param app: A Flask app instance.
    :param db: A Flask-SQLAlchemy instance.
    :param table: The table name you want to use.
    :param key_prefix: A prefix that is added to all store keys.
    :param use_signer: Whether to sign the session id cookie or not.
    :param permanent: Whether to use permanent session or not.
    """

    serializer = pickle
    session_class = SqlAlchemySession

    def __init__(self, app, db, table, key_prefix, use_signer=False,
                 permanent=True):
        if db is None:
            from flask_sqlalchemy import SQLAlchemy
            db = SQLAlchemy(app)
        self.db = db
        self.key_prefix = key_prefix
        self.use_signer = use_signer
        self.permanent = permanent
        self.has_same_site_capability = hasattr(self, "get_cookie_samesite")

        class Session(self.db.Model):
            __tablename__ = table

            id = self.db.Column(self.db.Integer, primary_key=True)
            session_id = self.db.Column(self.db.String(255), unique=True)
            data = self.db.Column(self.db.LargeBinary)
            expiry = self.db.Column(self.db.DateTime)

            def __init__(self, session_id, data, expiry):
                self.session_id = session_id
                self.data = data
                self.expiry = expiry

            def __repr__(self):
                return '<Session data %s>' % self.data

        # self.db.create_all()
        self.sql_session_model = Session

    def open_session(self, app, request):
        sid = request.cookies.get(app.session_cookie_name)
        if not sid:
            sid = self._generate_sid()
            return self.session_class(sid=sid, permanent=self.permanent)
        if self.use_signer:
            signer = self._get_signer(app)
            if signer is None:
                return None
            try:
                sid_as_bytes = signer.unsign(sid)
                sid = sid_as_bytes.decode()
            except BadSignature:
                sid = self._generate_sid()
                return self.session_class(sid=sid, permanent=self.permanent)

        store_id = self.key_prefix + sid
        saved_session = self.sql_session_model.query.filter_by(
            session_id=store_id).first()
        if saved_session and saved_session.expiry <= datetime.utcnow():
            # Delete expired session
            self.db.session.delete(saved_session)
            self.db.session.commit()
            saved_session = None
        if saved_session:
            try:
                val = saved_session.data
                data = self.serializer.loads(want_bytes(val))
                return self.session_class(data, sid=sid)
            except:
                return self.session_class(sid=sid, permanent=self.permanent)
        return self.session_class(sid=sid, permanent=self.permanent)

    def save_session(self, app, session, response):
        domain = self.get_cookie_domain(app)
        path = self.get_cookie_path(app)
        store_id = self.key_prefix + session.sid
        saved_session = self.sql_session_model.query.filter_by(
            session_id=store_id).first()
        if not session:
            if session.modified:
                if saved_session:
                    self.db.session.delete(saved_session)
                    self.db.session.commit()
                response.delete_cookie(app.session_cookie_name,
                                       domain=domain, path=path)
            return

        conditional_cookie_kwargs = {}
        httponly = self.get_cookie_httponly(app)
        secure = self.get_cookie_secure(app)
        if self.has_same_site_capability:
            conditional_cookie_kwargs["samesite"] = self.get_cookie_samesite(app)
        expires = self.get_expiration_time(app, session)
        val = self.serializer.dumps(dict(session))
        if saved_session:
            saved_session.data = val
            saved_session.expiry = expires
            self.db.session.commit()
        else:
            new_session = self.sql_session_model(store_id, val, expires)
            self.db.session.add(new_session)
            self.db.session.commit()
        if self.use_signer:
            session_id = self._get_signer(app).sign(want_bytes(session.sid))
        else:
            session_id = session.sid
        response.set_cookie(app.session_cookie_name, session_id,
                            expires=expires, httponly=httponly,
                            domain=domain, path=path, secure=secure,
                            **conditional_cookie_kwargs)
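Editor's note: a minimal sketch of the SQLAlchemy-backed configuration; the database URI and table name are placeholders. Note that since ``db.create_all()`` is commented out in ``__init__`` above, the session table is not created automatically and may need an explicit ``create_all()`` call:

from flask import Flask
from flask_session import Session

app = Flask(__name__)
app.config['SESSION_TYPE'] = 'sqlalchemy'            # selects SqlAlchemySessionInterface
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///sessions.db'  # placeholder URI
app.config['SESSION_SQLALCHEMY_TABLE'] = 'sessions'  # table name
Session(app)

with app.app_context():
    # Create the sessions table, since the interface itself does not
    app.session_interface.db.create_all()
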
@ -0,0 +1 @@
pip
@ -0,0 +1,7 @@
Copyright (C) 2013 Markus Siemens <markus@m-siemens.de>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@ -0,0 +1,176 @@
Metadata-Version: 2.1
Name: tinydb
Version: 4.7.1
Summary: TinyDB is a tiny, document oriented database optimized for your happiness :)
Home-page: https://github.com/msiemens/tinydb
License: MIT
Keywords: database,nosql
Author: Markus Siemens
Author-email: markus@m-siemens.de
Requires-Python: >=3.7,<4.0
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Topic :: Database
Classifier: Topic :: Database :: Database Engines/Servers
Classifier: Topic :: Utilities
Classifier: Typing :: Typed
Requires-Dist: typing-extensions (>=3.10.0,<5.0.0) ; python_full_version <= "3.7.0"
Project-URL: Changelog, https://tinydb.readthedocs.io/en/latest/changelog.html
Project-URL: Documentation, https://tinydb.readthedocs.org/
Project-URL: Issues, https://github.com/msiemens/tinydb/issues
Description-Content-Type: text/x-rst

.. image:: https://raw.githubusercontent.com/msiemens/tinydb/master/artwork/logo.png
    :scale: 100%
    :height: 150px

|Build Status| |Coverage| |Version|

Quick Links
***********

- `Example Code`_
- `Supported Python Versions`_
- `Documentation <http://tinydb.readthedocs.org/>`_
- `Changelog <https://tinydb.readthedocs.io/en/latest/changelog.html>`_
- `Extensions <https://tinydb.readthedocs.io/en/latest/extensions.html>`_
- `Contributing`_

Introduction
************

TinyDB is a lightweight document oriented database optimized for your happiness :)
It's written in pure Python and has no external dependencies. The targets are
small apps that would be blown away by a SQL-DB or an external database server.

TinyDB is:

- **tiny:** The current source code has 1800 lines of code (with about 40%
  documentation) and 1600 lines of tests.

- **document oriented:** Like MongoDB_, you can store any document
  (represented as ``dict``) in TinyDB.

- **optimized for your happiness:** TinyDB is designed to be simple and
  fun to use by providing a simple and clean API.

- **written in pure Python:** TinyDB neither needs an external server (as
  e.g. `PyMongo <https://api.mongodb.org/python/current/>`_) nor any dependencies
  from PyPI.

- **works on Python 3.7+ and PyPy3:** TinyDB works on all modern versions of Python
  and PyPy.

- **powerfully extensible:** You can easily extend TinyDB by writing new
  storages or modify the behaviour of storages with Middlewares.

- **100% test coverage:** No explanation needed.

To dive straight into all the details, head over to the `TinyDB docs
<https://tinydb.readthedocs.io/>`_. You can also discuss everything related
to TinyDB like general development, extensions or showcase your TinyDB-based
projects on the `discussion forum <http://forum.m-siemens.de/>`_.

Supported Python Versions
*************************

TinyDB has been tested with Python 3.7 - 3.11 and PyPy3.

Example Code
************

.. code-block:: python

    >>> from tinydb import TinyDB, Query
    >>> db = TinyDB('/path/to/db.json')
    >>> db.insert({'int': 1, 'char': 'a'})
    >>> db.insert({'int': 1, 'char': 'b'})

Query Language
==============

.. code-block:: python

    >>> User = Query()
    >>> # Search for a field value
    >>> db.search(User.name == 'John')
    [{'name': 'John', 'age': 22}, {'name': 'John', 'age': 37}]

    >>> # Combine two queries with logical and
    >>> db.search((User.name == 'John') & (User.age <= 30))
    [{'name': 'John', 'age': 22}]

    >>> # Combine two queries with logical or
    >>> db.search((User.name == 'John') | (User.name == 'Bob'))
    [{'name': 'John', 'age': 22}, {'name': 'John', 'age': 37}, {'name': 'Bob', 'age': 42}]

    >>> # Apply transformation to field with `map`
    >>> db.search((User.age.map(lambda x: x + x) == 44))
    [{'name': 'John', 'age': 22}]

    >>> # More possible comparisons: != < > <= >=
    >>> # More possible checks: where(...).matches(regex), where(...).test(your_test_func)

Tables
======

.. code-block:: python

    >>> table = db.table('name')
    >>> table.insert({'value': True})
    >>> table.all()
    [{'value': True}]

Using Middlewares
=================

.. code-block:: python

    >>> from tinydb.storages import JSONStorage
    >>> from tinydb.middlewares import CachingMiddleware
    >>> db = TinyDB('/path/to/db.json', storage=CachingMiddleware(JSONStorage))
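Editor's note: as the ``middlewares.py`` source later in this commit shows, ``CachingMiddleware`` only writes through to the underlying storage every ``WRITE_CACHE_SIZE`` writes and flushes the rest on ``close()``; a small sketch of making sure cached writes reach disk:

.. code-block:: python

    >>> with TinyDB('/path/to/db.json', storage=CachingMiddleware(JSONStorage)) as db:
    ...     db.insert({'int': 1})
    ...
    >>> # Leaving the ``with`` block calls close(), which flushes the cache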

Contributing
************

Whether reporting bugs, discussing improvements and new ideas or writing
extensions: Contributions to TinyDB are welcome! Here's how to get started:

1. Check for open issues or open a fresh issue to start a discussion around
   a feature idea or a bug
2. Fork `the repository <https://github.com/msiemens/tinydb/>`_ on GitHub,
   create a new branch off the `master` branch and start making your changes
   (known as `GitHub Flow <https://guides.github.com/introduction/flow/index.html>`_)
3. Write a test which shows that the bug was fixed or that the feature works
   as expected
4. Send a pull request and bug the maintainer until it gets merged and
   published ☺

.. |Build Status| image:: https://img.shields.io/azure-devops/build/msiemens/3e5baa75-12ec-43ac-9728-89823ee8c7e2/2.svg?style=flat-square
   :target: https://dev.azure.com/msiemens/github/_build?definitionId=2
.. |Coverage| image:: http://img.shields.io/coveralls/msiemens/tinydb.svg?style=flat-square
   :target: https://coveralls.io/r/msiemens/tinydb
.. |Version| image:: http://img.shields.io/pypi/v/tinydb.svg?style=flat-square
   :target: https://pypi.python.org/pypi/tinydb/
.. _Buzhug: http://buzhug.sourceforge.net/
.. _CodernityDB: https://github.com/perchouli/codernitydb
.. _MongoDB: http://mongodb.org/
@ -0,0 +1,27 @@
tinydb-4.7.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
tinydb-4.7.1.dist-info/LICENSE,sha256=sOKi05Jx49lrcX176CNaxBPXR5e_f3QvCAHPqtbIyRI,1080
tinydb-4.7.1.dist-info/METADATA,sha256=9cC_GDuJKtE-dJ9dTJ8FfUzAtJGnxgxwyp6vErAeRIM,6496
tinydb-4.7.1.dist-info/RECORD,,
tinydb-4.7.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
tinydb-4.7.1.dist-info/WHEEL,sha256=vVCvjcmxuUltf8cYhJ0sJMRDLr1XsPuxEId8YDzbyCY,88
tinydb/__init__.py,sha256=KPlEk-6pg-plXWKFraWCf9DNxu7UVRIc58YkQSHVHNo,939
tinydb/__pycache__/__init__.cpython-310.pyc,,
tinydb/__pycache__/database.cpython-310.pyc,,
tinydb/__pycache__/middlewares.cpython-310.pyc,,
tinydb/__pycache__/mypy_plugin.cpython-310.pyc,,
tinydb/__pycache__/operations.cpython-310.pyc,,
tinydb/__pycache__/queries.cpython-310.pyc,,
tinydb/__pycache__/storages.cpython-310.pyc,,
tinydb/__pycache__/table.cpython-310.pyc,,
tinydb/__pycache__/utils.cpython-310.pyc,,
tinydb/__pycache__/version.cpython-310.pyc,,
tinydb/database.py,sha256=ET8KSlvKRqob62yIzD1xxLDqBIdo4X-OWkazxSITEDA,8712
tinydb/middlewares.py,sha256=61s-U6L4C9_4a8dWDlNFSOVz6-TlIUBduRrNb34-XTY,3942
tinydb/mypy_plugin.py,sha256=Yu_wkCYgmtOEuIuF58WoBO_z72ayDEvQoP58nzLRXVU,1070
tinydb/operations.py,sha256=CfwnI_vCMnq79VlqYyigFN1dtMI_l6weh8113-HHCC8,1155
tinydb/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
tinydb/queries.py,sha256=JujB7mDNFRs-ioRtWTPM7MqXZXBgkMHqrvO54ggGhuo,16016
tinydb/storages.py,sha256=lDVtezCJtjgmQks2GoecatC_HxkOx8TX0iU9RRHPu8k,4726
tinydb/table.py,sha256=o7FsQHl08uv236q4hJMbYudgJzEL0n52uf4mMMaJnHA,25207
tinydb/utils.py,sha256=h7xiASbzg4CtHilCfHw3mAKB-ZKNv42ox9wjITdTecI,4598
tinydb/version.py,sha256=6DFOZuafPTrDERIQgrgSkT2t4tzamH1Bxiq0u959Zbc,22
@ -0,0 +1,4 @@
Wheel-Version: 1.0
Generator: poetry-core 1.4.0
Root-Is-Purelib: true
Tag: py3-none-any
32
.env/lib/python3.10/site-packages/tinydb/__init__.py
Normal file
@ -0,0 +1,32 @@
"""
TinyDB is a tiny, document oriented database optimized for your happiness :)

TinyDB stores different types of Python data types using a configurable
storage mechanism. It comes with a syntax for querying data and storing
data in multiple tables.

.. codeauthor:: Markus Siemens <markus@m-siemens.de>

Usage example:

>>> from tinydb import TinyDB, where
>>> from tinydb.storages import MemoryStorage
>>> db = TinyDB(storage=MemoryStorage)
>>> db.insert({'data': 5})  # Insert into '_default' table
>>> db.search(where('data') == 5)
[{'data': 5, '_id': 1}]
>>> # Now let's create a new table
>>> tbl = db.table('our_table')
>>> for i in range(10):
...     tbl.insert({'data': i})
...
>>> len(tbl.search(where('data') < 5))
5
"""

from .queries import Query, where
from .storages import Storage, JSONStorage
from .database import TinyDB
from .version import __version__

__all__ = ('TinyDB', 'Storage', 'JSONStorage', 'Query', 'where')
274
.env/lib/python3.10/site-packages/tinydb/database.py
Normal file
@ -0,0 +1,274 @@
"""
This module contains the main component of TinyDB: the database.
"""
from typing import Dict, Iterator, Set, Type

from . import JSONStorage
from .storages import Storage
from .table import Table, Document
from .utils import with_typehint

# The table's base class. This is used to add type hinting from the Table
# class to TinyDB. Currently, this supports PyCharm, Pyright/VS Code and MyPy.
TableBase: Type[Table] = with_typehint(Table)


class TinyDB(TableBase):
    """
    The main class of TinyDB.

    The ``TinyDB`` class is responsible for creating the storage class instance
    that will store this database's documents, managing the database
    tables as well as providing access to the default table.

    For table management, a simple ``dict`` is used that stores the table class
    instances accessible using their table name.

    Default table access is provided by forwarding all unknown method calls
    and property access operations to the default table by implementing
    ``__getattr__``.

    When creating a new instance, all arguments and keyword arguments (except
    for ``storage``) will be passed to the storage class that is provided. If
    no storage class is specified, :class:`~tinydb.storages.JSONStorage` will be
    used.

    .. admonition:: Customization

        For customization, the following class variables can be set:

        - ``table_class`` defines the class that is used to create tables,
        - ``default_table_name`` defines the name of the default table, and
        - ``default_storage_class`` will define the class that will be used to
          create storage instances if no other storage is passed.

        .. versionadded:: 4.0

    .. admonition:: Data Storage Model

        Data is stored using a storage class that provides persistence for a
        ``dict`` instance. This ``dict`` contains all tables and their data.
        The data is modelled like this::

            {
                'table1': {
                    0: {document...},
                    1: {document...},
                },
                'table2': {
                    ...
                }
            }

        Each entry in this ``dict`` uses the table name as its key and a
        ``dict`` of documents as its value. The document ``dict`` contains
        document IDs as keys and the documents themselves as values.

    :param storage: The class of the storage to use. Will be initialized
                    with ``args`` and ``kwargs``.
    """

    #: The class that will be used to create table instances
    #:
    #: .. versionadded:: 4.0
    table_class = Table

    #: The name of the default table
    #:
    #: .. versionadded:: 4.0
    default_table_name = '_default'

    #: The class that will be used by default to create storage instances
    #:
    #: .. versionadded:: 4.0
    default_storage_class = JSONStorage

    def __init__(self, *args, **kwargs) -> None:
        """
        Create a new instance of TinyDB.
        """

        storage = kwargs.pop('storage', self.default_storage_class)

        # Prepare the storage
        self._storage: Storage = storage(*args, **kwargs)

        self._opened = True
        self._tables: Dict[str, Table] = {}

    def __repr__(self):
        args = [
            'tables={}'.format(list(self.tables())),
            'tables_count={}'.format(len(self.tables())),
            'default_table_documents_count={}'.format(self.__len__()),
            'all_tables_documents_count={}'.format(
                ['{}={}'.format(table, len(self.table(table)))
                 for table in self.tables()]),
        ]

        return '<{} {}>'.format(type(self).__name__, ', '.join(args))

    def table(self, name: str, **kwargs) -> Table:
        """
        Get access to a specific table.

        If the table hasn't been accessed yet, a new table instance will be
        created using the :attr:`~tinydb.database.TinyDB.table_class` class.
        Otherwise, the previously created table instance will be returned.

        All further options besides the name are passed to the table class which
        by default is :class:`~tinydb.table.Table`. Check its documentation
        for further parameters you can pass.

        :param name: The name of the table.
        :param kwargs: Keyword arguments to pass to the table class constructor
        """

        if name in self._tables:
            return self._tables[name]

        table = self.table_class(self.storage, name, **kwargs)
        self._tables[name] = table

        return table

    def tables(self) -> Set[str]:
        """
        Get the names of all tables in the database.

        :returns: a set of table names
        """

        # TinyDB stores data as a dict of tables like this:
        #
        # {
        #     '_default': {
        #         0: {document...},
        #         1: {document...},
        #     },
        #     'table1': {
        #         ...
        #     }
        # }
        #
        # To get a set of table names, we thus construct a set of this main
        # dict which returns a set of the dict keys which are the table names.
        #
        # Storage.read() may return ``None`` if the database file is empty,
        # so we need to consider this case too and return an empty set when
        # it occurs.

        return set(self.storage.read() or {})

    def drop_tables(self) -> None:
        """
        Drop all tables from the database. **CANNOT BE REVERSED!**
        """

        # We drop all tables from this database by writing an empty dict
        # to the storage thereby returning to the initial state with no tables.
        self.storage.write({})

        # After that we need to remember to empty the ``_tables`` dict, so we'll
        # create new table instances when a table is accessed again.
        self._tables.clear()

    def drop_table(self, name: str) -> None:
        """
        Drop a specific table from the database. **CANNOT BE REVERSED!**

        :param name: The name of the table to drop.
        """

        # If the table is currently opened, we need to forget the table class
        # instance
        if name in self._tables:
            del self._tables[name]

        data = self.storage.read()

        # The database is uninitialized, there's nothing to do
        if data is None:
            return

        # The table does not exist, there's nothing to do
        if name not in data:
            return

        # Remove the table from the data dict
        del data[name]

        # Store the updated data back to the storage
        self.storage.write(data)

    @property
    def storage(self) -> Storage:
        """
        Get the storage instance used for this TinyDB instance.

        :return: This instance's storage
        :rtype: Storage
        """
        return self._storage

    def close(self) -> None:
        """
        Close the database.

        This may be needed if the storage instance used for this database
        needs to perform cleanup operations like closing file handles.

        To ensure this method is called, the TinyDB instance can be used as a
        context manager::

            with TinyDB('data.json') as db:
                db.insert({'foo': 'bar'})

        Upon leaving this context, the ``close`` method will be called.
        """
        self._opened = False
        self.storage.close()

    def __enter__(self):
        """
        Use the database as a context manager.

        Using the database as a context manager ensures that the
        :meth:`~tinydb.database.TinyDB.close` method is called upon leaving
        the context.

        :return: The current instance
        """
        return self

    def __exit__(self, *args):
        """
        Close the storage instance when leaving a context.
        """
        if self._opened:
            self.close()

    def __getattr__(self, name):
        """
        Forward all unknown attribute calls to the default table instance.
        """
        return getattr(self.table(self.default_table_name), name)

    # Here we forward magic methods to the default table instance. These are
    # not handled by __getattr__ so we need to forward them manually here

    def __len__(self):
        """
        Get the total number of documents in the default table.

        >>> db = TinyDB('db.json')
        >>> len(db)
        0
        """
        return len(self.table(self.default_table_name))

    def __iter__(self) -> Iterator[Document]:
        """
        Return an iterator for the default table's documents.
        """
        return iter(self.table(self.default_table_name))
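Editor's note: since ``__getattr__`` plus the manually forwarded magic methods route everything to the default table, a ``TinyDB`` instance can be used directly like a table; a small illustration (the file path is a placeholder):

from tinydb import TinyDB

db = TinyDB('db.json')
db.insert({'user': 'john'})    # forwarded to db.table('_default').insert(...)
print(len(db))                 # __len__ is forwarded manually above
print([doc for doc in db])     # __iter__ likewise iterates the default table
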
127
.env/lib/python3.10/site-packages/tinydb/middlewares.py
Normal file
@ -0,0 +1,127 @@
"""
Contains the :class:`base class <tinydb.middlewares.Middleware>` for
middlewares and implementations.
"""
from typing import Optional

from tinydb import Storage


class Middleware:
    """
    The base class for all Middlewares.

    Middlewares hook into the read/write process of TinyDB allowing you to
    extend the behaviour by adding caching, logging, ...

    Your middleware's ``__init__`` method has to call the parent class
    constructor so the middleware chain can be configured properly.
    """

    def __init__(self, storage_cls) -> None:
        self._storage_cls = storage_cls
        self.storage: Storage = None  # type: ignore

    def __call__(self, *args, **kwargs):
        """
        Create the storage instance and store it as self.storage.

        Usually a user creates a new TinyDB instance like this::

            TinyDB(storage=StorageClass)

        The storage keyword argument is used by TinyDB this way::

            self.storage = storage(*args, **kwargs)

        As we can see, ``storage(...)`` runs the constructor and returns the
        new storage instance.


        Using Middlewares, the user will call::

                                   The 'real' storage class
                                   v
            TinyDB(storage=Middleware(StorageClass))
                           ^
                           Already an instance!

        So, when running ``self.storage = storage(*args, **kwargs)`` Python
        now will call ``__call__`` and TinyDB will expect the return value to
        be the storage (or Middleware) instance. Returning the instance is
        simple, but we also got the underlying (*real*) StorageClass as an
        __init__ argument that still is not an instance.
        So, we initialize it in __call__ forwarding any arguments we receive
        from TinyDB (``TinyDB(arg1, kwarg1=value, storage=...)``).

        In case of nested Middlewares, calling the instance as if it was a
        class results in calling ``__call__``, which initializes the next
        nested Middleware that itself will initialize the next Middleware and
        so on.
        """

        self.storage = self._storage_cls(*args, **kwargs)

        return self

    def __getattr__(self, name):
        """
        Forward all unknown attribute calls to the underlying storage, so we
        remain as transparent as possible.
        """

        return getattr(self.__dict__['storage'], name)


class CachingMiddleware(Middleware):
    """
    Add some caching to TinyDB.

    This Middleware aims to improve the performance of TinyDB by writing only
    the last DB state every :attr:`WRITE_CACHE_SIZE` write operations and
    always reading from the cache.
    """

    #: The number of write operations to cache before writing to disc
    WRITE_CACHE_SIZE = 1000

    def __init__(self, storage_cls):
        # Initialize the parent constructor
        super().__init__(storage_cls)

        # Prepare the cache
        self.cache = None
        self._cache_modified_count = 0

    def read(self):
        if self.cache is None:
            # Empty cache: read from the storage
            self.cache = self.storage.read()

        # Return the cached data
        return self.cache

    def write(self, data):
        # Store data in cache
        self.cache = data
        self._cache_modified_count += 1

        # Check if we need to flush the cache
        if self._cache_modified_count >= self.WRITE_CACHE_SIZE:
            self.flush()

    def flush(self):
        """
        Flush all unwritten data to disk.
        """
        if self._cache_modified_count > 0:
            # Force-flush the cache by writing the data to the storage
            self.storage.write(self.cache)
            self._cache_modified_count = 0

    def close(self):
        # Flush potentially unwritten data
        self.flush()

        # Let the storage clean up too
        self.storage.close()
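Editor's note: the ``Middleware`` docstring above names logging as a use case; a minimal sketch of such a middleware (a hypothetical class, not part of tinydb), following the same read/write forwarding pattern as ``CachingMiddleware``:

from tinydb.middlewares import Middleware

class LoggingMiddleware(Middleware):
    """Print a line for every read/write that reaches the storage."""

    def __init__(self, storage_cls):
        # The parent constructor must be called so the chain is set up
        super().__init__(storage_cls)

    def read(self):
        print('read from storage')
        return self.storage.read()

    def write(self, data):
        print('write to storage')
        self.storage.write(data)

# Usage: TinyDB('db.json', storage=LoggingMiddleware(JSONStorage))
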
38
.env/lib/python3.10/site-packages/tinydb/mypy_plugin.py
Normal file
@ -0,0 +1,38 @@
from typing import TypeVar, Optional, Callable, Dict

from mypy.nodes import NameExpr
from mypy.options import Options
from mypy.plugin import Plugin, DynamicClassDefContext

T = TypeVar('T')
CB = Optional[Callable[[T], None]]
DynamicClassDef = DynamicClassDefContext


class TinyDBPlugin(Plugin):
    def __init__(self, options: Options):
        super().__init__(options)

        self.named_placeholders: Dict[str, str] = {}

    def get_dynamic_class_hook(self, fullname: str) -> CB[DynamicClassDef]:
        if fullname == 'tinydb.utils.with_typehint':
            def hook(ctx: DynamicClassDefContext):
                klass = ctx.call.args[0]
                assert isinstance(klass, NameExpr)

                type_name = klass.fullname
                assert type_name is not None

                qualified = self.lookup_fully_qualified(type_name)
                assert qualified is not None

                ctx.api.add_symbol_table_node(ctx.name, qualified)

            return hook

        return None


def plugin(_version: str):
    return TinyDBPlugin
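Editor's note: the ``plugin`` entry point above is registered through mypy's plugin machinery; a sketch of the configuration, assuming a standard ``mypy.ini`` at the project root:

# mypy.ini (assumed location: project root)
[mypy]
plugins = tinydb.mypy_plugin
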
69
.env/lib/python3.10/site-packages/tinydb/operations.py
Normal file
@ -0,0 +1,69 @@
"""
A collection of update operations for TinyDB.

They are used for updates like this:

>>> db.update(delete('foo'), where('foo') == 2)

This would delete the ``foo`` field from all documents where ``foo`` equals 2.
"""


def delete(field):
    """
    Delete a given field from the document.
    """
    def transform(doc):
        del doc[field]

    return transform


def add(field, n):
    """
    Add ``n`` to a given field in the document.
    """
    def transform(doc):
        doc[field] += n

    return transform


def subtract(field, n):
    """
    Subtract ``n`` from a given field in the document.
    """
    def transform(doc):
        doc[field] -= n

    return transform


def set(field, val):
    """
    Set a given field to ``val``.
    """
    def transform(doc):
        doc[field] = val

    return transform


def increment(field):
    """
    Increment a given field in the document by 1.
    """
    def transform(doc):
        doc[field] += 1

    return transform


def decrement(field):
    """
    Decrement a given field in the document by 1.
    """
    def transform(doc):
        doc[field] -= 1

    return transform
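Editor's note: these operations compose with the query language from ``queries.py``; a short usage sketch against a database (the path is a placeholder):

from tinydb import TinyDB, where
from tinydb.operations import increment, set

db = TinyDB('db.json')
db.insert({'name': 'John', 'age': 22})
db.update(increment('age'), where('name') == 'John')  # age -> 23
db.update(set('age', 40), where('name') == 'John')    # age -> 40
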
0
.env/lib/python3.10/site-packages/tinydb/py.typed
Normal file
526
.env/lib/python3.10/site-packages/tinydb/queries.py
Normal file
@ -0,0 +1,526 @@
"""
Contains the querying interface.

Starting with :class:`~tinydb.queries.Query` you can construct complex
queries:

>>> ((where('f1') == 5) & (where('f2') != 2)) | where('s').matches(r'^\\w+$')
(('f1' == 5) and ('f2' != 2)) or ('s' ~= ^\\w+$ )

Queries are executed by using the ``__call__``:

>>> q = where('val') == 5
>>> q({'val': 5})
True
>>> q({'val': 1})
False
"""

import re
import sys
from typing import Mapping, Tuple, Callable, Any, Union, List, Optional

from .utils import freeze

if sys.version_info >= (3, 8):
    from typing import Protocol
else:
    from typing_extensions import Protocol

__all__ = ('Query', 'QueryLike', 'where')


def is_sequence(obj):
    return hasattr(obj, '__iter__')


class QueryLike(Protocol):
    """
    A typing protocol that acts like a query.

    Something that we use as a query must have two properties:

    1. It must be callable, accepting a `Mapping` object and returning a
       boolean that indicates whether the value matches the query, and
    2. it must have a stable hash that will be used for query caching.

    In addition, to mark a query as non-cacheable (e.g. if it involves
    some remote lookup) it needs to have a method called ``is_cacheable``
    that returns ``False``.

    This query protocol is used to make MyPy correctly support the query
    pattern that TinyDB uses.

    See also https://mypy.readthedocs.io/en/stable/protocols.html#simple-user-defined-protocols
    """
    def __call__(self, value: Mapping) -> bool: ...

    def __hash__(self) -> int: ...


class QueryInstance:
    """
    A query instance.

    This is the object on which the actual query operations are performed. The
    :class:`~tinydb.queries.Query` class acts like a query builder and
    generates :class:`~tinydb.queries.QueryInstance` objects which will
    evaluate their query against a given document when called.

    Query instances can be combined using logical OR and AND and inverted using
    logical NOT.

    In order to be usable in a query cache, a query needs to have a stable hash
    value with the same query always returning the same hash. That way a query
    instance can be used as a key in a dictionary.
    """

    def __init__(self, test: Callable[[Mapping], bool], hashval: Optional[Tuple]):
        self._test = test
        self._hash = hashval

    def is_cacheable(self) -> bool:
        return self._hash is not None

    def __call__(self, value: Mapping) -> bool:
        """
        Evaluate the query to check if it matches a specified value.

        :param value: The value to check.
        :return: Whether the value matches this query.
        """
        return self._test(value)

    def __hash__(self) -> int:
        # We calculate the query hash by using the ``hashval`` object which
        # describes this query uniquely, so we can calculate a stable hash
        # value by simply hashing it
        return hash(self._hash)

    def __repr__(self):
        return 'QueryImpl{}'.format(self._hash)

    def __eq__(self, other: object):
        if isinstance(other, QueryInstance):
            return self._hash == other._hash

        return False

    # --- Query modifiers -----------------------------------------------------

    def __and__(self, other: 'QueryInstance') -> 'QueryInstance':
        # We use a frozenset for the hash as the AND operation is commutative
        # (a & b == b & a) and the frozenset does not consider the order of
        # elements
        if self.is_cacheable() and other.is_cacheable():
            hashval = ('and', frozenset([self._hash, other._hash]))
        else:
            hashval = None
        return QueryInstance(lambda value: self(value) and other(value), hashval)

    def __or__(self, other: 'QueryInstance') -> 'QueryInstance':
        # We use a frozenset for the hash as the OR operation is commutative
        # (a | b == b | a) and the frozenset does not consider the order of
        # elements
        if self.is_cacheable() and other.is_cacheable():
            hashval = ('or', frozenset([self._hash, other._hash]))
        else:
            hashval = None
        return QueryInstance(lambda value: self(value) or other(value), hashval)

    def __invert__(self) -> 'QueryInstance':
        hashval = ('not', self._hash) if self.is_cacheable() else None
        return QueryInstance(lambda value: not self(value), hashval)


class Query(QueryInstance):
    """
    TinyDB Queries.

    Allows building queries for TinyDB databases. There are two main ways of
    using queries:

    1) ORM-like usage:

    >>> User = Query()
    >>> db.search(User.name == 'John Doe')
    >>> db.search(User['logged-in'] == True)

    2) Classical usage:

    >>> db.search(where('value') == True)

    Note that ``where(...)`` is a shorthand for ``Query(...)`` allowing for
    a more fluent syntax.

    Besides the methods documented here you can combine queries using the
    binary AND and OR operators:

    >>> # Binary AND:
    >>> db.search((where('field1').exists()) & (where('field2') == 5))
    >>> # Binary OR:
    >>> db.search((where('field1').exists()) | (where('field2') == 5))

    Queries are executed by calling the resulting object. They expect to get
    the document to test as the first argument and return ``True`` or
    ``False`` depending on whether the documents match the query or not.
    """

    def __init__(self) -> None:
        # The current path of fields to access when evaluating the object
        self._path: Tuple[Union[str, Callable], ...] = ()

        # Prevent empty queries from being evaluated
        def notest(_):
            raise RuntimeError('Empty query was evaluated')

        super().__init__(
            test=notest,
            hashval=(None,)
        )

    def __repr__(self):
        return '{}()'.format(type(self).__name__)

    def __hash__(self):
        return super().__hash__()

    def __getattr__(self, item: str):
        # Generate a new query object with the new query path
        # We use type(self) to get the class of the current query in case
        # someone uses a subclass of ``Query``
        query = type(self)()

        # Now we add the accessed item to the query path ...
        query._path = self._path + (item,)

        # ... and update the query hash
        query._hash = ('path', query._path) if self.is_cacheable() else None

        return query

    def __getitem__(self, item: str):
        # A different syntax for ``__getattr__``

        # We cannot call ``getattr(item)`` here as it would try to resolve
        # the name as a method name first, only then call our ``__getattr__``
        # method. By calling ``__getattr__`` directly, we make sure that
        # calling e.g. ``Query()['test']`` will always generate a query for a
        # document's ``test`` field instead of returning a reference to the
        # ``Query.test`` method
        return self.__getattr__(item)

    def _generate_test(
            self,
            test: Callable[[Any], bool],
            hashval: Tuple,
            allow_empty_path: bool = False
    ) -> QueryInstance:
        """
        Generate a query based on a test function that first resolves the query
        path.

        :param test: The test the query executes.
        :param hashval: The hash of the query.
        :return: A :class:`~tinydb.queries.QueryInstance` object
        """
        if not self._path and not allow_empty_path:
            raise ValueError('Query has no path')

        def runner(value):
            try:
                # Resolve the path
                for part in self._path:
                    if isinstance(part, str):
                        value = value[part]
                    else:
                        value = part(value)
            except (KeyError, TypeError):
                return False
            else:
                # Perform the specified test
                return test(value)

        return QueryInstance(
            lambda value: runner(value),
            (hashval if self.is_cacheable() else None)
        )

    def __eq__(self, rhs: Any):
        """
        Test a dict value for equality.

        >>> Query().f1 == 42

        :param rhs: The value to compare against
        """
        return self._generate_test(
            lambda value: value == rhs,
            ('==', self._path, freeze(rhs))
        )

    def __ne__(self, rhs: Any):
        """
        Test a dict value for inequality.

        >>> Query().f1 != 42

        :param rhs: The value to compare against
        """
        return self._generate_test(
            lambda value: value != rhs,
            ('!=', self._path, freeze(rhs))
        )

    def __lt__(self, rhs: Any) -> QueryInstance:
        """
        Test a dict value for being lower than another value.

        >>> Query().f1 < 42

        :param rhs: The value to compare against
        """
        return self._generate_test(
            lambda value: value < rhs,
            ('<', self._path, rhs)
        )

    def __le__(self, rhs: Any) -> QueryInstance:
        """
        Test a dict value for being lower than or equal to another value.

        >>> where('f1') <= 42

        :param rhs: The value to compare against
        """
        return self._generate_test(
            lambda value: value <= rhs,
            ('<=', self._path, rhs)
        )

    def __gt__(self, rhs: Any) -> QueryInstance:
        """
        Test a dict value for being greater than another value.

        >>> Query().f1 > 42

        :param rhs: The value to compare against
        """
        return self._generate_test(
            lambda value: value > rhs,
            ('>', self._path, rhs)
        )

    def __ge__(self, rhs: Any) -> QueryInstance:
        """
        Test a dict value for being greater than or equal to another value.

        >>> Query().f1 >= 42

        :param rhs: The value to compare against
        """
        return self._generate_test(
            lambda value: value >= rhs,
            ('>=', self._path, rhs)
        )

    def exists(self) -> QueryInstance:
        """
        Test for a dict where a provided key exists.

        >>> Query().f1.exists()
        """
        return self._generate_test(
            lambda _: True,
            ('exists', self._path)
        )

    def matches(self, regex: str, flags: int = 0) -> QueryInstance:
        """
        Run a regex test against a dict value (whole string has to match).

        >>> Query().f1.matches(r'^\\w+$')

        :param regex: The regular expression to use for matching
        :param flags: regex flags to pass to ``re.match``
        """
        def test(value):
            if not isinstance(value, str):
                return False

            return re.match(regex, value, flags) is not None

        return self._generate_test(test, ('matches', self._path, regex))

    def search(self, regex: str, flags: int = 0) -> QueryInstance:
        """
        Run a regex test against a dict value (only a substring has to
        match).

        >>> Query().f1.search(r'^\\w+$')

        :param regex: The regular expression to use for matching
        :param flags: regex flags to pass to ``re.match``
        """

        def test(value):
            if not isinstance(value, str):
                return False

            return re.search(regex, value, flags) is not None

        return self._generate_test(test, ('search', self._path, regex))

    def test(self, func: Callable[[Mapping], bool], *args) -> QueryInstance:
        """
        Run a user-defined test function against a dict value.

        >>> def test_func(val):
        ...     return val == 42
        ...
        >>> Query().f1.test(test_func)

        .. warning::

            The test function provided needs to be deterministic (returning the
            same value when provided with the same arguments), otherwise this
            may mess up the query cache that :class:`~tinydb.table.Table`
            implements.

        :param func: The function to call, passing the dict as the first
                     argument
        :param args: Additional arguments to pass to the test function
        """
        return self._generate_test(
            lambda value: func(value, *args),
            ('test', self._path, func, args)
        )

    def any(self, cond: Union[QueryInstance, List[Any]]) -> QueryInstance:
        """
        Check if a condition is met by any document in a list,
        where a condition can also be a sequence (e.g. list).

        >>> Query().f1.any(Query().f2 == 1)

        Matches::

            {'f1': [{'f2': 1}, {'f2': 0}]}

        >>> Query().f1.any([1, 2, 3])

        Matches::

            {'f1': [1, 2]}
            {'f1': [3, 4, 5]}

        :param cond: Either a query that at least one document has to match or
                     a list of which at least one document has to be contained
                     in the tested document.
        """
        if callable(cond):
            def test(value):
                return is_sequence(value) and any(cond(e) for e in value)

        else:
            def test(value):
                return is_sequence(value) and any(e in cond for e in value)

        return self._generate_test(
            lambda value: test(value),
            ('any', self._path, freeze(cond))
        )

    def all(self, cond: Union['QueryInstance', List[Any]]) -> QueryInstance:
        """
        Check if a condition is met by all documents in a list,
        where a condition can also be a sequence (e.g. list).

        >>> Query().f1.all(Query().f2 == 1)

        Matches::

            {'f1': [{'f2': 1}, {'f2': 1}]}

        >>> Query().f1.all([1, 2, 3])

        Matches::

            {'f1': [1, 2, 3, 4, 5]}

        :param cond: Either a query that all documents have to match or a list
                     which has to be contained in the tested document.
        """
        if callable(cond):
            def test(value):
                return is_sequence(value) and all(cond(e) for e in value)

        else:
            def test(value):
                return is_sequence(value) and all(e in value for e in cond)

        return self._generate_test(
            lambda value: test(value),
            ('all', self._path, freeze(cond))
        )

    def one_of(self, items: List[Any]) -> QueryInstance:
        """
        Check if the value is contained in a list or generator.

        >>> Query().f1.one_of(['value 1', 'value 2'])

        :param items: The list of items to check with
        """
        return self._generate_test(
            lambda value: value in items,
            ('one_of', self._path, freeze(items))
        )

    def fragment(self, document: Mapping) -> QueryInstance:
        """
        Match documents that contain all key/value pairs of the given fragment.
        """
        def test(value):
            for key in document:
                if key not in value or value[key] != document[key]:
                    return False

            return True

        return self._generate_test(
            lambda value: test(value),
            ('fragment', freeze(document)),
            allow_empty_path=True
        )

    def noop(self) -> QueryInstance:
        """
        Always evaluate to ``True``.

        Useful for having a base value when composing queries dynamically.
        """

        return QueryInstance(
            lambda value: True,
            ()
        )

    def map(self, fn: Callable[[Any], Any]) -> 'Query':
        """
        Add a function to the query path. Similar to __getattr__ but for
        arbitrary functions.
        """
        query = type(self)()

        # Now we add the callable to the query path ...
        query._path = self._path + (fn,)

        # ... and kill the hash - callable objects can be mutable, so it's
        # harmful to cache their results.
        query._hash = None

        return query


def where(key: str) -> Query:
    """
    A shorthand for ``Query()[key]``
    """
    return Query()[key]
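Editor's note: ``Query.noop`` exists exactly for building queries incrementally, as its docstring says; a sketch of composing a filter from a dict of required field values:

from tinydb import Query

def build_query(required: dict):
    q = Query().noop()  # always-true base value
    for field, expected in required.items():
        # Each clause ANDs onto the accumulated query
        q = q & (Query()[field] == expected)
    return q

# Usage: db.search(build_query({'name': 'John', 'age': 22}))
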
166
.env/lib/python3.10/site-packages/tinydb/storages.py
Normal file
@ -0,0 +1,166 @@
"""
Contains the :class:`base class <tinydb.storages.Storage>` for storages and
implementations.
"""

import io
import json
import os
from abc import ABC, abstractmethod
from typing import Dict, Any, Optional

__all__ = ('Storage', 'JSONStorage', 'MemoryStorage')


def touch(path: str, create_dirs: bool):
    """
    Create a file if it doesn't exist yet.

    :param path: The file to create.
    :param create_dirs: Whether to create all missing parent directories.
    """
    if create_dirs:
        base_dir = os.path.dirname(path)

        # Check if we need to create missing parent directories
        if not os.path.exists(base_dir):
            os.makedirs(base_dir)

    # Create the file by opening it in 'a' mode which creates the file if it
    # does not exist yet but does not modify its contents
    with open(path, 'a'):
        pass


class Storage(ABC):
    """
    The abstract base class for all Storages.

    A Storage (de)serializes the current state of the database and stores it in
    some place (memory, file on disk, ...).
    """

    # Using ABCMeta as metaclass allows instantiating only storages that have
    # implemented read and write

    @abstractmethod
    def read(self) -> Optional[Dict[str, Dict[str, Any]]]:
        """
        Read the current state.

        Any kind of deserialization should go here.

        Return ``None`` here to indicate that the storage is empty.
        """

        raise NotImplementedError('To be overridden!')

    @abstractmethod
    def write(self, data: Dict[str, Dict[str, Any]]) -> None:
        """
        Write the current state of the database to the storage.

        Any kind of serialization should go here.

        :param data: The current state of the database.
        """

        raise NotImplementedError('To be overridden!')

    def close(self) -> None:
        """
        Optional: Close open file handles, etc.
        """

        pass


class JSONStorage(Storage):
    """
    Store the data in a JSON file.
    """

    def __init__(self, path: str, create_dirs=False, encoding=None, access_mode='r+', **kwargs):
        """
        Create a new instance.

        Also creates the storage file, if it doesn't exist and the access mode
        is appropriate for writing.

        :param path: Where to store the JSON data.
        :param access_mode: mode in which the file is opened (r, r+, w, a, x, b, t, +, U)
        :type access_mode: str
        """

        super().__init__()

        self._mode = access_mode
        self.kwargs = kwargs

        # Create the file if it doesn't exist and creating is allowed by the
        # access mode
        if any([character in self._mode for character in ('+', 'w', 'a')]):  # any of the writing modes
            touch(path, create_dirs=create_dirs)

        # Open the file for reading/writing
        self._handle = open(path, mode=self._mode, encoding=encoding)

    def close(self) -> None:
        self._handle.close()

    def read(self) -> Optional[Dict[str, Dict[str, Any]]]:
        # Get the file size by moving the cursor to the file end and reading
        # its location
        self._handle.seek(0, os.SEEK_END)
        size = self._handle.tell()

        if not size:
            # File is empty, so we return ``None`` so TinyDB can properly
            # initialize the database
            return None
        else:
            # Return the cursor to the beginning of the file
            self._handle.seek(0)

            # Load the JSON contents of the file
            return json.load(self._handle)

    def write(self, data: Dict[str, Dict[str, Any]]):
        # Move the cursor to the beginning of the file just in case
        self._handle.seek(0)

        # Serialize the database state using the user-provided arguments
        serialized = json.dumps(data, **self.kwargs)

        # Write the serialized data to the file
        try:
            self._handle.write(serialized)
        except io.UnsupportedOperation:
            raise IOError('Cannot write to the database. Access mode is "{0}"'.format(self._mode))

        # Ensure the file has been written
        self._handle.flush()
        os.fsync(self._handle.fileno())

        # Remove data that is behind the new cursor in case the file has
        # gotten shorter
        self._handle.truncate()


class MemoryStorage(Storage):
    """
    Store the data as JSON in memory.
    """

    def __init__(self):
        """
        Create a new instance.
        """

        super().__init__()
        self.memory = None

    def read(self) -> Optional[Dict[str, Dict[str, Any]]]:
        return self.memory

    def write(self, data: Dict[str, Dict[str, Any]]):
        self.memory = data
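Editor's note: because ``JSONStorage.write`` forwards its stored ``**kwargs`` to ``json.dumps``, formatting options can be passed straight through ``TinyDB``; a small sketch (the path is a placeholder):

from tinydb import TinyDB

# ``indent`` and ``sort_keys`` end up in json.dumps(data, **self.kwargs)
db = TinyDB('db.json', indent=2, sort_keys=True)
db.insert({'key': 'value'})
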
750
.env/lib/python3.10/site-packages/tinydb/table.py
Normal file
@ -0,0 +1,750 @@
|
||||
"""
|
||||
This module implements tables, the central place for accessing and manipulating
|
||||
data in TinyDB.
|
||||
"""
|
||||
|
||||
from typing import (
|
||||
Callable,
|
||||
Dict,
|
||||
Iterable,
|
||||
Iterator,
|
||||
List,
|
||||
Mapping,
|
||||
Optional,
|
||||
Union,
|
||||
cast,
|
||||
Tuple
|
||||
)
|
||||
|
||||
from .queries import QueryLike
|
||||
from .storages import Storage
|
||||
from .utils import LRUCache
|
||||
|
||||
__all__ = ('Document', 'Table')
|
||||
|
||||
|
||||
class Document(dict):
|
||||
"""
|
||||
A document stored in the database.
|
||||
|
||||
This class provides a way to access both a document's content and
|
||||
its ID using ``doc.doc_id``.
|
||||
"""
|
||||
|
||||
def __init__(self, value: Mapping, doc_id: int):
|
||||
super().__init__(value)
|
||||
self.doc_id = doc_id
|
||||
|
||||
|
||||
class Table:
    """
    Represents a single TinyDB table.

    It provides methods for accessing and manipulating documents.

    .. admonition:: Query Cache

        As an optimization, a query cache is implemented using a
        :class:`~tinydb.utils.LRUCache`. This class mimics the interface of
        a normal ``dict``, but starts to remove the least-recently used entries
        once a threshold is reached.

        The query cache is updated on every search operation. When writing
        data, the whole cache is discarded as the query results may have
        changed.

    .. admonition:: Customization

        For customization, the following class variables can be set:

        - ``document_class`` defines the class that is used to represent
          documents,
        - ``document_id_class`` defines the class that is used to represent
          document IDs,
        - ``query_cache_class`` defines the class that is used for the query
          cache
        - ``default_query_cache_capacity`` defines the default capacity of
          the query cache

        .. versionadded:: 4.0


    :param storage: The storage instance to use for this table
    :param name: The table name
    :param cache_size: Maximum capacity of query cache
    """

    #: The class used to represent documents
    #:
    #: .. versionadded:: 4.0
    document_class = Document

    #: The class used to represent a document ID
    #:
    #: .. versionadded:: 4.0
    document_id_class = int

    #: The class used for caching query results
    #:
    #: .. versionadded:: 4.0
    query_cache_class = LRUCache

    #: The default capacity of the query cache
    #:
    #: .. versionadded:: 4.0
    default_query_cache_capacity = 10

    def __init__(
        self,
        storage: Storage,
        name: str,
        cache_size: int = default_query_cache_capacity
    ):
        """
        Create a table instance.
        """

        self._storage = storage
        self._name = name
        self._query_cache: LRUCache[QueryLike, List[Document]] \
            = self.query_cache_class(capacity=cache_size)

        self._next_id = None

    def __repr__(self):
        args = [
            'name={!r}'.format(self.name),
            'total={}'.format(len(self)),
            'storage={}'.format(self._storage),
        ]

        return '<{} {}>'.format(type(self).__name__, ', '.join(args))

    @property
    def name(self) -> str:
        """
        Get the table name.
        """
        return self._name

    @property
    def storage(self) -> Storage:
        """
        Get the table storage instance.
        """
        return self._storage
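A sketch of the customization hooks above; ``BigCacheTable`` and ``BigCacheDB`` are invented names, and ``TinyDB.table_class`` is the standard extension point for swapping in a custom table class:

    from tinydb import TinyDB
    from tinydb.table import Table

    class BigCacheTable(Table):
        # cache more query results than the default of 10
        default_query_cache_capacity = 100

    class BigCacheDB(TinyDB):
        table_class = BigCacheTable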
    def insert(self, document: Mapping) -> int:
        """
        Insert a new document into the table.

        :param document: the document to insert
        :returns: the inserted document's ID
        """

        # Make sure the document implements the ``Mapping`` interface
        if not isinstance(document, Mapping):
            raise ValueError('Document is not a Mapping')

        # First, we get the document ID for the new document
        if isinstance(document, Document):
            # For a `Document` object we use the specified ID
            doc_id = document.doc_id

            # We also reset the stored next ID so the next insert won't
            # re-use document IDs by accident when storing an old value
            self._next_id = None
        else:
            # In all other cases we use the next free ID
            doc_id = self._get_next_id()

        # Now, we update the table and add the document
        def updater(table: dict):
            if doc_id in table:
                raise ValueError(f'Document with ID {str(doc_id)} '
                                 f'already exists')

            # By calling ``dict(document)`` we convert the data we got to a
            # ``dict`` instance even if it was a different class that
            # implemented the ``Mapping`` interface
            table[doc_id] = dict(document)

        # See below for details on ``Table._update``
        self._update_table(updater)

        return doc_id

    def insert_multiple(self, documents: Iterable[Mapping]) -> List[int]:
        """
        Insert multiple documents into the table.

        :param documents: an Iterable of documents to insert
        :returns: a list containing the inserted documents' IDs
        """
        doc_ids = []

        def updater(table: dict):
            for document in documents:

                # Make sure the document implements the ``Mapping`` interface
                if not isinstance(document, Mapping):
                    raise ValueError('Document is not a Mapping')

                if isinstance(document, Document):
                    # Check if document does not override an existing document
                    if document.doc_id in table:
                        raise ValueError(
                            f'Document with ID {str(document.doc_id)} '
                            f'already exists'
                        )

                    # Store the doc_id, so we can return all document IDs
                    # later. Then save the document with its doc_id and
                    # skip the rest of the current loop
                    doc_id = document.doc_id
                    doc_ids.append(doc_id)
                    table[doc_id] = dict(document)
                    continue

                # Generate new document ID for this document
                # Store the doc_id, so we can return all document IDs
                # later, then save the document with the new doc_id
                doc_id = self._get_next_id()
                doc_ids.append(doc_id)
                table[doc_id] = dict(document)

        # See below for details on ``Table._update``
        self._update_table(updater)

        return doc_ids
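Example usage (IDs and values illustrative):

    from tinydb.table import Document

    db.insert({'name': 'alice'})                      # ID assigned automatically
    db.insert(Document({'name': 'bob'}, doc_id=42))   # explicit ID via Document
    db.insert_multiple([{'n': i} for i in range(3)])  # returns the three new IDs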
    def all(self) -> List[Document]:
        """
        Get all documents stored in the table.

        :returns: a list with all documents.
        """

        # iter(self) (implemented in Table.__iter__) provides an iterator
        # that returns all documents in this table. We use it to get a list
        # of all documents by using the ``list`` constructor to perform the
        # conversion.

        return list(iter(self))

    def search(self, cond: QueryLike) -> List[Document]:
        """
        Search for all documents matching a 'where' cond.

        :param cond: the condition to check against
        :returns: list of matching documents
        """

        # First, we check the query cache to see if it has results for this
        # query
        cached_results = self._query_cache.get(cond)
        if cached_results is not None:
            return cached_results[:]

        # Perform the search by applying the query to all documents.
        # Then, only if the document matches the query, convert it
        # to the document class and document ID class.
        docs = [
            self.document_class(doc, self.document_id_class(doc_id))
            for doc_id, doc in self._read_table().items()
            if cond(doc)
        ]

        # Only cache cacheable queries.
        #
        # This weird `getattr` dance is needed to make MyPy happy as
        # it doesn't know that a query might have an `is_cacheable` method
        # that is not declared in the `QueryLike` protocol due to it being
        # optional.
        # See: https://github.com/python/mypy/issues/1424
        #
        # Note also that by default we expect custom query objects to be
        # cacheable (which means they need to have a stable hash value).
        # This is to keep consistency with TinyDB's behavior before
        # `is_cacheable` was introduced which assumed that all queries
        # are cacheable.
        is_cacheable: Callable[[], bool] = getattr(cond, 'is_cacheable',
                                                   lambda: True)
        if is_cacheable():
            # Update the query cache
            self._query_cache[cond] = docs[:]

        return docs
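For example, a repeated search is served from the cache (query and data illustrative):

    from tinydb import Query

    User = Query()
    db.search(User.name == 'alice')   # evaluated against the table, then cached
    db.search(User.name == 'alice')   # equal query hashes the same: cache hit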
    def get(
        self,
        cond: Optional[QueryLike] = None,
        doc_id: Optional[int] = None,
    ) -> Optional[Document]:
        """
        Get exactly one document specified by a query or a document ID.

        Returns ``None`` if the document doesn't exist.

        :param cond: the condition to check against
        :param doc_id: the document's ID

        :returns: the document or ``None``
        """

        if doc_id is not None:
            # Retrieve a document specified by its ID
            table = self._read_table()
            raw_doc = table.get(str(doc_id), None)

            if raw_doc is None:
                return None

            # Convert the raw data to the document class
            return self.document_class(raw_doc, doc_id)

        elif cond is not None:
            # Find a document specified by a query
            # The trailing underscore in doc_id_ is needed so MyPy
            # doesn't think that `doc_id_` (which is a string) needs
            # to have the same type as `doc_id` which is this function's
            # parameter and is an optional `int`.
            for doc_id_, doc in self._read_table().items():
                if cond(doc):
                    return self.document_class(
                        doc,
                        self.document_id_class(doc_id_)
                    )

            return None

        raise RuntimeError('You have to pass either cond or doc_id')

    def contains(
        self,
        cond: Optional[QueryLike] = None,
        doc_id: Optional[int] = None
    ) -> bool:
        """
        Check whether the database contains a document matching a query or
        an ID.

        If ``doc_id`` is set, it checks if the db contains the specified ID.

        :param cond: the condition to use
        :param doc_id: the document ID to look for
        """
        if doc_id is not None:
            # Documents specified by ID
            return self.get(doc_id=doc_id) is not None

        elif cond is not None:
            # Document specified by condition
            return self.get(cond) is not None

        raise RuntimeError('You have to pass either cond or doc_id')
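Example usage (values illustrative):

    db.get(User.name == 'alice')      # first matching document, or None
    db.get(doc_id=42)                 # direct lookup by document ID
    db.contains(User.name == 'bob')   # True if at least one document matches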
    def update(
        self,
        fields: Union[Mapping, Callable[[Mapping], None]],
        cond: Optional[QueryLike] = None,
        doc_ids: Optional[Iterable[int]] = None,
    ) -> List[int]:
        """
        Update all matching documents to have a given set of fields.

        :param fields: the fields that the matching documents will have
                       or a method that will update the documents
        :param cond: which documents to update
        :param doc_ids: a list of document IDs
        :returns: a list containing the updated documents' IDs
        """

        # Define the function that will perform the update
        if callable(fields):
            def perform_update(table, doc_id):
                # Update documents by calling the update function provided by
                # the user
                fields(table[doc_id])
        else:
            def perform_update(table, doc_id):
                # Update documents by setting all fields from the provided data
                table[doc_id].update(fields)

        if doc_ids is not None:
            # Perform the update operation for documents specified by a list
            # of document IDs

            updated_ids = list(doc_ids)

            def updater(table: dict):
                # Call the processing callback with all document IDs
                for doc_id in updated_ids:
                    perform_update(table, doc_id)

            # Perform the update operation (see _update_table for details)
            self._update_table(updater)

            return updated_ids

        elif cond is not None:
            # Perform the update operation for documents specified by a query

            # Collect affected doc_ids
            updated_ids = []

            def updater(table: dict):
                _cond = cast(QueryLike, cond)

                # We need to convert the keys iterator to a list because
                # we may remove entries from the ``table`` dict during
                # iteration and doing this without the list conversion would
                # result in an exception (RuntimeError: dictionary changed size
                # during iteration)
                for doc_id in list(table.keys()):
                    # Pass through all documents to find documents matching the
                    # query. Call the processing callback with the document ID
                    if _cond(table[doc_id]):
                        # Add ID to list of updated documents
                        updated_ids.append(doc_id)

                        # Perform the update (see above)
                        perform_update(table, doc_id)

            # Perform the update operation (see _update_table for details)
            self._update_table(updater)

            return updated_ids

        else:
            # Update all documents unconditionally

            updated_ids = []

            def updater(table: dict):
                # Process all documents
                for doc_id in list(table.keys()):
                    # Add ID to list of updated documents
                    updated_ids.append(doc_id)

                    # Perform the update (see above)
                    perform_update(table, doc_id)

            # Perform the update operation (see _update_table for details)
            self._update_table(updater)

            return updated_ids
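Example usage with both flavors of ``fields``; ``increment`` is a helper that ships in ``tinydb.operations``:

    from tinydb.operations import increment

    db.update({'active': True}, User.name == 'alice')  # set fields directly
    db.update(increment('age'), User.name == 'bob')    # or mutate via a callable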
    def update_multiple(
        self,
        updates: Iterable[
            Tuple[Union[Mapping, Callable[[Mapping], None]], QueryLike]
        ],
    ) -> List[int]:
        """
        Update all matching documents to have a given set of fields.

        :returns: a list containing the updated documents' IDs
        """

        # Define the function that will perform the update
        def perform_update(fields, table, doc_id):
            if callable(fields):
                # Update documents by calling the update function provided
                # by the user
                fields(table[doc_id])
            else:
                # Update documents by setting all fields from the provided
                # data
                table[doc_id].update(fields)

        # Perform the update operation for documents specified by a query

        # Collect affected doc_ids
        updated_ids = []

        def updater(table: dict):
            # We need to convert the keys iterator to a list because
            # we may remove entries from the ``table`` dict during
            # iteration and doing this without the list conversion would
            # result in an exception (RuntimeError: dictionary changed size
            # during iteration)
            for doc_id in list(table.keys()):
                for fields, cond in updates:
                    _cond = cast(QueryLike, cond)

                    # Pass through all documents to find documents matching the
                    # query. Call the processing callback with the document ID
                    if _cond(table[doc_id]):
                        # Add ID to list of updated documents
                        updated_ids.append(doc_id)

                        # Perform the update (see above)
                        perform_update(fields, table, doc_id)

        # Perform the update operation (see _update_table for details)
        self._update_table(updater)

        return updated_ids
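Example usage, applying several (fields, condition) pairs in one pass (field names illustrative):

    db.update_multiple([
        ({'status': 'done'}, User.status == 'pending'),
        ({'status': 'archived'}, User.status == 'stale'),
    ])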
    def upsert(self, document: Mapping, cond: Optional[QueryLike] = None) -> List[int]:
        """
        Update documents, if they exist, insert them otherwise.

        Note: This will update *all* documents matching the query. Document
        argument can be a tinydb.table.Document object if you want to specify a
        doc_id.

        :param document: the document to insert or the fields to update
        :param cond: which document to look for, optional if you've passed a
                     Document with a doc_id
        :returns: a list containing the updated documents' IDs
        """

        # Extract doc_id
        if isinstance(document, Document) and hasattr(document, 'doc_id'):
            doc_ids: Optional[List[int]] = [document.doc_id]
        else:
            doc_ids = None

        # Make sure we can actually find a matching document
        if doc_ids is None and cond is None:
            raise ValueError("If you don't specify a search query, you must "
                             "specify a doc_id. Hint: use a table.Document "
                             "object.")

        # Perform the update operation
        try:
            updated_docs: Optional[List[int]] = self.update(document, cond, doc_ids)
        except KeyError:
            # This happens when a doc_id is specified, but it's missing
            updated_docs = None

        # If documents have been updated: return their IDs
        if updated_docs:
            return updated_docs

        # There are no documents that match the specified query -> insert the
        # data as a new document
        return [self.insert(document)]
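Example usage (values illustrative):

    db.upsert({'name': 'alice', 'logged_in': True}, User.name == 'alice')
    db.upsert(Document({'name': 'bob'}, doc_id=42))   # by doc_id, no query needed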
    def remove(
        self,
        cond: Optional[QueryLike] = None,
        doc_ids: Optional[Iterable[int]] = None,
    ) -> List[int]:
        """
        Remove all matching documents.

        :param cond: the condition to check against
        :param doc_ids: a list of document IDs
        :returns: a list containing the removed documents' IDs
        """
        if doc_ids is not None:
            # This function returns the list of IDs for the documents that have
            # been removed. When removing documents identified by a set of
            # document IDs, it's this list of document IDs we need to return
            # later.
            # We convert the document ID iterator into a list, so we can both
            # use the document IDs to remove the specified documents and
            # to return the list of affected document IDs
            removed_ids = list(doc_ids)

            def updater(table: dict):
                for doc_id in removed_ids:
                    table.pop(doc_id)

            # Perform the remove operation
            self._update_table(updater)

            return removed_ids

        if cond is not None:
            removed_ids = []

            # This updater function will be called with the table data
            # as its first argument. See ``Table._update`` for details on this
            # operation
            def updater(table: dict):
                # We need to convince MyPy (the static type checker) that
                # the ``cond is not None`` invariant still holds true when
                # the updater function is called
                _cond = cast(QueryLike, cond)

                # We need to convert the keys iterator to a list because
                # we may remove entries from the ``table`` dict during
                # iteration and doing this without the list conversion would
                # result in an exception (RuntimeError: dictionary changed size
                # during iteration)
                for doc_id in list(table.keys()):
                    if _cond(table[doc_id]):
                        # Add document ID to list of removed document IDs
                        removed_ids.append(doc_id)

                        # Remove document from the table
                        table.pop(doc_id)

            # Perform the remove operation
            self._update_table(updater)

            return removed_ids

        raise RuntimeError('Use truncate() to remove all documents')

    def truncate(self) -> None:
        """
        Truncate the table by removing all documents.
        """

        # Update the table by resetting all data
        self._update_table(lambda table: table.clear())

        # Reset document ID counter
        self._next_id = None

    def count(self, cond: QueryLike) -> int:
        """
        Count the documents matching a query.

        :param cond: the condition to use
        """

        return len(self.search(cond))

    def clear_cache(self) -> None:
        """
        Clear the query cache.
        """

        self._query_cache.clear()

    def __len__(self):
        """
        Count the total number of documents in this table.
        """

        return len(self._read_table())

    def __iter__(self) -> Iterator[Document]:
        """
        Iterate over all documents stored in the table.

        :returns: an iterator over all documents.
        """

        # Iterate all documents and their IDs
        for doc_id, doc in self._read_table().items():
            # Convert documents to the document class
            yield self.document_class(doc, self.document_id_class(doc_id))
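Example usage (values illustrative):

    db.remove(User.name == 'alice')   # remove by query
    db.remove(doc_ids=[1, 2])         # remove by document ID
    db.truncate()                     # drop every document in the table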
    def _get_next_id(self):
        """
        Return the ID for a newly inserted document.
        """

        # If we already know the next ID
        if self._next_id is not None:
            next_id = self._next_id
            self._next_id = next_id + 1

            return next_id

        # Determine the next document ID by finding out the max ID value
        # of the current table documents

        # Read the table documents
        table = self._read_table()

        # If the table is empty, set the initial ID
        if not table:
            next_id = 1
            self._next_id = next_id + 1

            return next_id

        # Determine the next ID based on the maximum ID that's currently in use
        max_id = max(self.document_id_class(i) for i in table.keys())
        next_id = max_id + 1

        # The next ID we will return AFTER this call needs to be larger than
        # the current next ID we calculated
        self._next_id = next_id + 1

        return next_id

    def _read_table(self) -> Dict[str, Mapping]:
        """
        Read the table data from the underlying storage.

        Documents and doc_ids are NOT yet transformed, as
        we may not want to convert *all* documents when returning
        only one document for example.
        """

        # Retrieve the tables from the storage
        tables = self._storage.read()

        if tables is None:
            # The database is empty
            return {}

        # Retrieve the current table's data
        try:
            table = tables[self.name]
        except KeyError:
            # The table does not exist yet, so it is empty
            return {}

        return table

    def _update_table(self, updater: Callable[[Dict[int, Mapping]], None]):
        """
        Perform a table update operation.

        The storage interface used by TinyDB only allows to read/write the
        complete database data, but not modifying only portions of it. Thus,
        to only update portions of the table data, we first perform a read
        operation, perform the update on the table data and then write
        the updated data back to the storage.

        As a further optimization, we don't convert the documents into the
        document class, as the table data will *not* be returned to the user.
        """

        tables = self._storage.read()

        if tables is None:
            # The database is empty
            tables = {}

        try:
            raw_table = tables[self.name]
        except KeyError:
            # The table does not exist yet, so it is empty
            raw_table = {}

        # Convert the document IDs to the document ID class.
        # This is required as the rest of TinyDB expects the document IDs
        # to be an instance of ``self.document_id_class`` but the storage
        # might convert dict keys to strings.
        table = {
            self.document_id_class(doc_id): doc
            for doc_id, doc in raw_table.items()
        }

        # Perform the table update operation
        updater(table)

        # Convert the document IDs back to strings.
        # This is required as some storages (most notably the JSON file format)
        # don't support IDs other than strings.
        tables[self.name] = {
            str(doc_id): doc
            for doc_id, doc in table.items()
        }

        # Write the newly updated data back to the storage
        self._storage.write(tables)

        # Clear the query cache, as the table contents have changed
        self.clear_cache()
159
.env/lib/python3.10/site-packages/tinydb/utils.py
Normal file
@ -0,0 +1,159 @@
"""
Utility functions.
"""

from collections import OrderedDict, abc
from typing import List, Iterator, TypeVar, Generic, Union, Optional, Type, \
    TYPE_CHECKING

K = TypeVar('K')
V = TypeVar('V')
D = TypeVar('D')
T = TypeVar('T')

__all__ = ('LRUCache', 'freeze', 'with_typehint')


def with_typehint(baseclass: Type[T]):
    """
    Add type hints from a specified class to a base class:

    >>> class Foo(with_typehint(Bar)):
    ...     pass

    This would add type hints from class ``Bar`` to class ``Foo``.

    Note that while PyCharm and Pyright (for VS Code) understand this pattern,
    MyPy does not. For that reason TinyDB has a MyPy plugin in
    ``mypy_plugin.py`` that adds support for this pattern.
    """
    if TYPE_CHECKING:
        # In the case of type checking: pretend that the target class inherits
        # from the specified base class
        return baseclass

    # Otherwise: just inherit from `object` like a regular Python class
    return object
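A sketch of the pattern; ``TableView`` is an invented name:

    class TableView(with_typehint(Table)):
        # Editors resolve Table's attributes on this class for completion,
        # while at runtime it is just a plain object subclass.
        pass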
class LRUCache(abc.MutableMapping, Generic[K, V]):
    """
    A least-recently used (LRU) cache with a fixed cache size.

    This class acts as a dictionary but has a limited size. If the number of
    entries in the cache exceeds the cache size, the least-recently accessed
    entry will be discarded.

    This is implemented using an ``OrderedDict``. On every access the accessed
    entry is moved to the front by re-inserting it into the ``OrderedDict``.
    When adding an entry and the cache size is exceeded, the last entry will
    be discarded.
    """

    def __init__(self, capacity=None) -> None:
        self.capacity = capacity
        self.cache: OrderedDict[K, V] = OrderedDict()

    @property
    def lru(self) -> List[K]:
        return list(self.cache.keys())

    @property
    def length(self) -> int:
        return len(self.cache)

    def clear(self) -> None:
        self.cache.clear()

    def __len__(self) -> int:
        return self.length

    def __contains__(self, key: object) -> bool:
        return key in self.cache

    def __setitem__(self, key: K, value: V) -> None:
        self.set(key, value)

    def __delitem__(self, key: K) -> None:
        del self.cache[key]

    def __getitem__(self, key) -> V:
        value = self.get(key)
        if value is None:
            raise KeyError(key)

        return value

    def __iter__(self) -> Iterator[K]:
        return iter(self.cache)

    def get(self, key: K, default: Optional[D] = None) -> Optional[Union[V, D]]:
        value = self.cache.get(key)

        if value is not None:
            self.cache.move_to_end(key, last=True)

            return value

        return default

    def set(self, key: K, value: V):
        if self.cache.get(key):
            self.cache.move_to_end(key, last=True)

        else:
            self.cache[key] = value

            # Check, if the cache is full and we have to remove old items
            # If the queue is of unlimited size, self.capacity is NaN and
            # x > NaN is always False in Python and the cache won't be cleared.
            if self.capacity is not None and self.length > self.capacity:
                self.cache.popitem(last=False)
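A small demonstration of the eviction behavior:

    cache = LRUCache(capacity=2)
    cache['a'] = 1
    cache['b'] = 2
    cache['c'] = 3       # exceeds capacity: evicts 'a', the least-recently used
    print(cache.lru)     # ['b', 'c']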
class FrozenDict(dict):
    """
    An immutable dictionary.

    This is used to generate stable hashes for queries that contain dicts.
    Usually, Python dicts are not hashable because they are mutable. This
    class removes the mutability and implements the ``__hash__`` method.
    """

    def __hash__(self):
        # Calculate the hash by hashing a tuple of all dict items
        return hash(tuple(sorted(self.items())))

    def _immutable(self, *args, **kws):
        raise TypeError('object is immutable')

    # Disable write access to the dict
    __setitem__ = _immutable
    __delitem__ = _immutable
    clear = _immutable
    setdefault = _immutable  # type: ignore
    popitem = _immutable

    def update(self, e=None, **f):
        raise TypeError('object is immutable')

    def pop(self, k, d=None):
        raise TypeError('object is immutable')


def freeze(obj):
    """
    Freeze an object by making it immutable and thus hashable.
    """
    if isinstance(obj, dict):
        # Transform dicts into ``FrozenDict``s
        return FrozenDict((k, freeze(v)) for k, v in obj.items())
    elif isinstance(obj, list):
        # Transform lists into tuples
        return tuple(freeze(el) for el in obj)
    elif isinstance(obj, set):
        # Transform sets into ``frozenset``s
        return frozenset(obj)
    else:
        # Don't handle all other objects
        return obj
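For example, a query fragment containing a dict and a list becomes hashable after freezing:

    q = freeze({'name': 'alice', 'tags': ['x', 'y']})
    hash(q)   # works: the dict is now a FrozenDict and the list a tuple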
1
.env/lib/python3.10/site-packages/tinydb/version.py
Normal file
@ -0,0 +1 @@
__version__ = '4.7.1'
BIN
__pycache__/database.cpython-310.pyc
Normal file
Binary file not shown.
BIN
__pycache__/utils.cpython-310.pyc
Normal file
Binary file not shown.
1
database.json
Normal file
@ -0,0 +1 @@
{"users": {"1": {"camille@chauvet.pro": "Fgzf4BY6R8oBfoz6VrHziwxjZiz4dB2cU7FcXP5kh"}}}
21
database.py
Normal file
@ -0,0 +1,21 @@
from tinydb import TinyDB, Query

db = TinyDB("./database.json")
users = db.table("users")

def get_users():
    return users.all()

def get_user_by_email(email: str):
    for user in get_users():
        # Each document stores a single {email: password} pair
        if list(user.keys())[0] == email:
            return user
    return None

def user_exist(email: str):
    return get_user_by_email(email) is not None

def add_user(email: str, password: str):
    users.insert({email: password})

def check_password(email: str, password: str):
    user = get_user_by_email(email)
    return user is not None and user.get(email) == password
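Example usage of the helpers above (credentials illustrative):

    add_user("user@example.com", "secret")
    user_exist("user@example.com")                 # True
    check_password("user@example.com", "secret")   # True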
BIN
flask_session/2029240f6d1128be89ddc32729463129
Normal file
Binary file not shown.
BIN
flask_session/320444fe8859cda10187fa078aaa3674
Normal file
Binary file not shown.
26
hash.py
Normal file
@ -0,0 +1,26 @@
import bcrypt

# Declaring our password
password = b'GeekPassword'

# Generating a salt
salt = bcrypt.gensalt()
# Hashing the password
hashed = bcrypt.hashpw(password, salt)

print(salt)
print(type(hashed))

# The first 29 bytes of a bcrypt hash encode the algorithm, cost and salt
salt = hashed[:29]

print(salt)
# Hashing the same password with the same salt reproduces the same hash,
# so the comparison must be against the stored hash, not the plaintext
print(hashed == bcrypt.hashpw(password, salt))
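Note that bcrypt also provides a constant-time helper for this comparison, which avoids slicing the salt out by hand:

    print(bcrypt.checkpw(password, hashed))   # True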
69
main.py
@ -1,10 +1,73 @@
from flask import Flask, render_template, request, redirect, session
from flask_session import Session
import utils
import database

app = Flask(__name__)
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
Session(app)

@app.route("/")
def home():
    return render_template("home.html")


@app.route("/connected")
def connected():
    if not session.get("email"):
        return redirect("/login")
    return render_template("connected.html")

@app.route("/login")
def login():
    return render_template("login.html")

@app.route('/login', methods=['POST'])
def login_post():
    email = request.form.get('email')
    password = request.form.get('password')
    if not database.user_exist(email):
        return render_template("login.html", error="Email ou mot de passe faux")
    if not database.check_password(email, password):
        return render_template("login.html", error="Email ou mot de passe faux")
    session["email"] = email
    return redirect("/connected")

@app.route('/logout')
def logout():
    session.pop('email', None)
    return redirect("/login")

@app.route("/signin")
def signin():
    return render_template("signin.html")

@app.route('/signin', methods=['POST'])
def signup_post():
    email = request.form.get('email')
    password = request.form.get('password')
    repassword = request.form.get('repassword')
    if password != repassword:
        return render_template("signin.html", error="Les deux mots de passe sont différents")
    if utils.check_email(email):
        return render_template("signin.html", error="Votre email n'est pas valide")
    if database.user_exist(email):
        return render_template("signin.html", error="Email déjà utilisé")
    database.add_user(email, password)
    session["email"] = email  # log the new user in so /connected doesn't bounce back to /login
    return redirect("/connected")

@app.route("/forgot")
def forgot():
    return render_template("forgot.html")

@app.route("/reset/<uuid>")
def reset(uuid):
    return "bozo"

@app.route("/join/<uuid>")
def join(uuid):
    return "bozo"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
14
templates/home.html
Normal file
@ -0,0 +1,14 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>PyMenu</title>
</head>
<body>
    <a href="/login">
        <input type="button" value="Espace membre" />
    </a>
    <h1>Hello World!</h1>
    <h2>Welcome to FlaskApp!</h2>
</body>
</html>
89
templates/login.html
Normal file
@ -0,0 +1,89 @@
<!DOCTYPE html>
<html lang="fr">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="stylesheet" href="index.css" />
    <title>Beyond School</title>
</head>
<body>
    <div id="container">
        <form method="post">
            <h2>Connexion</h2>
            <label><b>Email :</b></label>
            <input type="email" name="email" placeholder="Insérez votre email" required>
            <br>
            <label><b>Mot de passe :</b></label>
            <input type="password" name="password" placeholder="Insérez votre mot de passe" required>
            <br>
            <a href="/forgot">
                <h6>oublié ?</h6>
            </a>
            {% if error %}
            <p>{{error}}</p>
            {% endif %}
            <input type="submit" value="Connexion">
            <a href="/signin">
                <input type="button" value="Inscription">
            </a>
        </form>
    </div>
    <style>
        body {
            background: #67BE4B;
        }
        #container {
            width: 400px;
            margin: 0 auto;
            margin-top: 10%;
        }
        /* Bordered form */
        form {
            width: 100%;
            padding: 40px;
            border: 1px solid #f1f1f1;
            background: #fff;
            box-shadow: 0 0 20px 0 rgba(0, 0, 0, 0.2), 0 5px 5px 0 rgba(0, 0, 0, 0.24);
        }
        #container h2 {
            width: 38%;
            margin: 0 auto;
            padding-bottom: 10px;
            text-align: center;
        }

        /* Full-width inputs */
        input[type=email], input[type=password], input[type=number] {
            width: 100%;
            padding: 12px 20px;
            margin: 8px 0;
            display: inline-block;
            border: 1px solid #ccc;
            box-sizing: border-box;
        }

        /* Set a style for all buttons */
        input[type=submit], input[type=button] {
            background-color: #53af57;
            color: white;
            padding: 14px 20px;
            margin: 8px 0;
            border: none;
            cursor: pointer;
            width: 100%;
        }
        input[type=submit]:hover {
            background-color: white;
            color: #53af57;
            border: 1px solid #53af57;
        }
        h6 {
            float: right;
        }
        p {
            color: red;
        }
    </style>
</body>
</html>
89
templates/signin.html
Normal file
@ -0,0 +1,89 @@
<!DOCTYPE html>
<html lang="fr">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="stylesheet" href="index.css" />
    <title>Beyond School</title>
</head>
<body>
    <div id="container">
        <form name="signin" method="post">
            <h2>Inscription</h2>
            <label><b>Email :</b></label>
            <input type="email" name="email" placeholder="Insérez votre email" required>
            <br>
            <label><b>Mot de passe :</b></label>
            <input type="password" name="password" placeholder="Insérez votre mot de passe" required>
            <br>
            <label><b>Confirmez le mot de passe :</b></label>
            <input type="password" name="repassword" placeholder="Insérez votre mot de passe" required>
            {% if error %}
            <p class="error">{{error}}</p>
            {% endif %}
            <br>
            <input type="submit" value="Inscription">
            <a href="/login">
                <input type="button" value="Connexion">
            </a>
        </form>
    </div>
    <style>
        body {
            background: #67BE4B;
        }
        #container {
            width: 400px;
            margin: 0 auto;
            margin-top: 10%;
        }
        /* Bordered form */
        form {
            width: 100%;
            padding: 40px;
            border: 1px solid #f1f1f1;
            background: #fff;
            box-shadow: 0 0 20px 0 rgba(0, 0, 0, 0.2), 0 5px 5px 0 rgba(0, 0, 0, 0.24);
        }
        #container h2 {
            width: 38%;
            margin: 0 auto;
            padding-bottom: 10px;
            text-align: center;
        }

        /* Full-width inputs */
        input[type=email], input[type=password], input[type=number] {
            width: 100%;
            padding: 12px 20px;
            margin: 8px 0;
            display: inline-block;
            border: 1px solid #ccc;
            box-sizing: border-box;
        }

        /* Set a style for all buttons */
        input[type=submit], input[type=button] {
            background-color: #53af57;
            color: white;
            padding: 14px 20px;
            margin: 8px 0;
            border: none;
            cursor: pointer;
            width: 100%;
        }
        input[type=submit]:hover {
            background-color: white;
            color: #53af57;
            border: 1px solid #53af57;
        }
        h6 {
            float: right;
        }
        p {
            color: red;
        }
    </style>
</body>
</html>