commit 6bf920e66c
Merge branch 'master' into ultradnsPlugin

Makefile | 4
@@ -36,7 +36,7 @@ endif
	@echo ""

 dev-docs:
-	pip install -r docs/requirements.txt
+	pip install -r requirements-docs.txt

 reset-db:
	@echo "--> Dropping existing 'lemur' database"
@@ -46,7 +46,7 @@ reset-db:
	@echo "--> Enabling pg_trgm extension"
	psql lemur -c "create extension IF NOT EXISTS pg_trgm;"
	@echo "--> Applying migrations"
-	lemur db upgrade
+	cd lemur && lemur db upgrade

 setup-git:
	@echo "--> Installing git hooks"
@@ -593,8 +593,60 @@ If you are not using a metric provider you do not need to configure any of these
 Plugin Specific Options
 -----------------------
 
+Active Directory Certificate Services Plugin
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. data:: ADCS_SERVER
+    :noindex:
+
+    FQDN of your ADCS server
+
+.. data:: ADCS_AUTH_METHOD
+    :noindex:
+
+    The chosen authentication method: either 'basic' (the default), 'ntlm', or 'cert' (SSL client certificate).
+    The next two variables are interpreted differently for different methods.
+
+.. data:: ADCS_USER
+    :noindex:
+
+    The username (basic) or the path to the public certificate (cert) of the user accessing the PKI
+
+.. data:: ADCS_PWD
+    :noindex:
+
+    The password (basic) or the path to the private key (cert) of the user accessing the PKI
+
+.. data:: ADCS_TEMPLATE
+    :noindex:
+
+    Template to be used for certificate issuing. Usually the display name without spaces
+
+.. data:: ADCS_START
+    :noindex:
+
+.. data:: ADCS_STOP
+    :noindex:
+
+.. data:: ADCS_ISSUING
+    :noindex:
+
+    Contains the issuing certificate of the CA
+
+.. data:: ADCS_ROOT
+    :noindex:
+
+    Contains the root certificate of the CA
+
 Verisign Issuer Plugin
-^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~
 
 Authorities will each have their own configuration options. There is currently just one plugin bundled with Lemur,
 Verisign/Symantec. Additional plugins may define additional options. Refer to the plugin's own documentation
@@ -642,7 +694,7 @@ for those plugins.
 
 Digicert Issuer Plugin
-^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~
 
 The following configuration properties are required to use the Digicert issuer plugin.
@@ -690,7 +742,7 @@ The following configuration properties are required to use the Digicert issuer plugin.
 
 CFSSL Issuer Plugin
-^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~
 
 The following configuration properties are required to use the CFSSL issuer plugin.
@@ -716,7 +768,7 @@ The following configuration properties are required to use the CFSSL issuer plugin.
 
 Hashicorp Vault Source/Destination Plugin
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Lemur can import and export certificate data to and from a Hashicorp Vault secrets store. Lemur can connect to a different Vault service per source/destination.
@@ -738,7 +790,7 @@ Vault Destination supports a regex filter to prevent certificates with SAN that
 
 AWS Source/Destination Plugin
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 In order for Lemur to manage its own account and other accounts we must ensure it has the correct AWS permissions.
@@ -1090,7 +1142,9 @@ Verisign/Symantec
 -----------------
 
 :Authors:
-    Kevin Glisson <kglisson@netflix.com>
+    Kevin Glisson <kglisson@netflix.com>,
+    Curtis Castrapel <ccastrapel@netflix.com>,
+    Hossein Shafagh <hshafagh@netflix.com>
 :Type:
     Issuer
 :Description:
@@ -1116,6 +1170,8 @@ Acme
 
 :Authors:
     Kevin Glisson <kglisson@netflix.com>,
+    Curtis Castrapel <ccastrapel@netflix.com>,
+    Hossein Shafagh <hshafagh@netflix.com>,
     Mikhail Khodorovskiy <mikhail.khodorovskiy@jivesoftware.com>
 :Type:
     Issuer
@@ -1127,7 +1183,9 @@ Atlas
 -----
 
 :Authors:
-    Kevin Glisson <kglisson@netflix.com>
+    Kevin Glisson <kglisson@netflix.com>,
+    Curtis Castrapel <ccastrapel@netflix.com>,
+    Hossein Shafagh <hshafagh@netflix.com>
 :Type:
     Metric
 :Description:
@@ -1138,7 +1196,9 @@ Email
 -----
 
 :Authors:
-    Kevin Glisson <kglisson@netflix.com>
+    Kevin Glisson <kglisson@netflix.com>,
+    Curtis Castrapel <ccastrapel@netflix.com>,
+    Hossein Shafagh <hshafagh@netflix.com>
 :Type:
     Notification
 :Description:
@@ -1160,7 +1220,9 @@ AWS
 ----
 
 :Authors:
-    Kevin Glisson <kglisson@netflix.com>
+    Kevin Glisson <kglisson@netflix.com>,
+    Curtis Castrapel <ccastrapel@netflix.com>,
+    Hossein Shafagh <hshafagh@netflix.com>
 :Type:
     Source
 :Description:
@@ -1171,7 +1233,9 @@ AWS
 ----
 
 :Authors:
-    Kevin Glisson <kglisson@netflix.com>
+    Kevin Glisson <kglisson@netflix.com>,
+    Curtis Castrapel <ccastrapel@netflix.com>,
+    Hossein Shafagh <hshafagh@netflix.com>
 :Type:
     Destination
 :Description:

docs/conf.py | 58
@@ -18,17 +18,18 @@ from unittest.mock import MagicMock
 # If extensions (or modules to document with autodoc) are in another directory,
 # add these directories to sys.path here. If the directory is relative to the
 # documentation root, use os.path.abspath to make it absolute, like shown here.
-sys.path.insert(0, os.path.abspath('..'))
+sys.path.insert(0, os.path.abspath(".."))
 
 # Mock packages that cannot be installed on rtd
-on_rtd = os.environ.get('READTHEDOCS') == 'True'
+on_rtd = os.environ.get("READTHEDOCS") == "True"
 if on_rtd:
 
     class Mock(MagicMock):
         @classmethod
         def __getattr__(cls, name):
             return MagicMock()
 
-    MOCK_MODULES = ['ldap']
+    MOCK_MODULES = ["ldap"]
     sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
 
 # -- General configuration ------------------------------------------------
@@ -39,27 +40,23 @@ if on_rtd:
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
-extensions = [
-    'sphinx.ext.autodoc',
-    'sphinxcontrib.autohttp.flask',
-    'sphinx.ext.todo',
-]
+extensions = ["sphinx.ext.autodoc", "sphinxcontrib.autohttp.flask", "sphinx.ext.todo"]
 
 # Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
+templates_path = ["_templates"]
 
 # The suffix of source filenames.
-source_suffix = '.rst'
+source_suffix = ".rst"
 
 # The encoding of source files.
 # source_encoding = 'utf-8-sig'
 
 # The master toctree document.
-master_doc = 'index'
+master_doc = "index"
 
 # General information about the project.
-project = u'lemur'
-copyright = u'2018, Netflix Inc.'
+project = u"lemur"
+copyright = u"2018, Netflix Inc."
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
@@ -84,7 +81,7 @@ version = release = about["__version__"]
 
 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
-exclude_patterns = ['_build']
+exclude_patterns = ["_build"]
 
 # The reST default role (used for this markup: `text`) to use for all
 # documents.
@@ -102,7 +99,7 @@ exclude_patterns = ['_build']
 # show_authors = False
 
 # The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
+pygments_style = "sphinx"
 
 # A list of ignored prefixes for module index sorting.
 # modindex_common_prefix = []
@@ -114,11 +111,12 @@ pygments_style = 'sphinx'
 # -- Options for HTML output ----------------------------------------------
 
 # on_rtd is whether we are on readthedocs.org, this line of code grabbed from docs.readthedocs.org
-on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
+on_rtd = os.environ.get("READTHEDOCS", None) == "True"
 
 if not on_rtd:  # only import and set the theme if we're building docs locally
     import sphinx_rtd_theme
-    html_theme = 'sphinx_rtd_theme'
+
+    html_theme = "sphinx_rtd_theme"
     html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
 
 # Theme options are theme-specific and customize the look and feel of a theme
@@ -148,7 +146,7 @@ if not on_rtd:  # only import and set the theme if we're building docs locally
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
+html_static_path = ["_static"]
 
 # Add any extra paths that contain custom files (such as robots.txt or
 # .htaccess) here, relative to this directory. These files are copied
@@ -197,7 +195,7 @@ html_static_path = ['_static']
 # html_file_suffix = None
 
 # Output file base name for HTML help builder.
-htmlhelp_basename = 'lemurdoc'
+htmlhelp_basename = "lemurdoc"
 
 
 # -- Options for LaTeX output ---------------------------------------------
@@ -205,10 +203,8 @@ htmlhelp_basename = 'lemurdoc'
 latex_elements = {
     # The paper size ('letterpaper' or 'a4paper').
     #'papersize': 'letterpaper',
-
     # The font size ('10pt', '11pt' or '12pt').
     #'pointsize': '10pt',
-
     # Additional stuff for the LaTeX preamble.
     #'preamble': '',
 }
@@ -217,8 +213,7 @@ latex_elements = {
 # (source start file, target name, title,
 # author, documentclass [howto, manual, or own class]).
 latex_documents = [
-    ('index', 'lemur.tex', u'Lemur Documentation',
-     u'Kevin Glisson', 'manual'),
+    ("index", "lemur.tex", u"Lemur Documentation", u"Netflix Security", "manual")
 ]
 
 # The name of an image file (relative to this directory) to place at the top of
@@ -246,10 +241,7 @@ latex_documents = [
 
 # One entry per manual page. List of tuples
 # (source start file, name, description, authors, manual section).
-man_pages = [
-    ('index', 'Lemur', u'Lemur Documentation',
-     [u'Kevin Glisson'], 1)
-]
+man_pages = [("index", "Lemur", u"Lemur Documentation", [u"Netflix Security"], 1)]
 
 # If true, show URL addresses after external links.
 # man_show_urls = False
@@ -261,9 +253,15 @@ man_pages = [
 # (source start file, target name, title, author,
 # dir menu entry, description, category)
 texinfo_documents = [
-    ('index', 'Lemur', u'Lemur Documentation',
-     u'Kevin Glisson', 'Lemur', 'SSL Certificate Management',
-     'Miscellaneous'),
+    (
+        "index",
+        "Lemur",
+        u"Lemur Documentation",
+        u"Netflix Security",
+        "Lemur",
+        "SSL Certificate Management",
+        "Miscellaneous",
+    )
 ]
 
 # Documents to append as an appendix to all manuals.
@@ -22,12 +22,18 @@ Once you've got all that, the rest is simple:
     # If you have a fork, you'll want to clone it instead
    git clone git://github.com/netflix/lemur.git
 
-    # Create a python virtualenv
-    mkvirtualenv lemur
+    # Create and activate python virtualenv from within the lemur repo
+    python3 -m venv env
+    . env/bin/activate
 
-    # Make the magic happen
+    # Install doc requirements
     make dev-docs
 
+    # Make the docs
+    cd docs
+    make html
+
 Running ``make dev-docs`` will install the basic requirements to get Sphinx running.
@@ -58,7 +64,7 @@ Once you've got all that, the rest is simple:
     git clone git://github.com/lemur/lemur.git
 
     # Create a python virtualenv
-    mkvirtualenv lemur
+    python3 -m venv env
 
     # Make the magic happen
     make
@@ -135,7 +141,7 @@ The test suite consists of multiple parts, testing both the Python and JavaScript
 
     make test
 
-If you only need to run the Python tests, you can do so with ``make test-python``, as well as ``test-js`` for the JavaScript tests.
+If you only need to run the Python tests, you can do so with ``make test-python``, as well as ``make test-js`` for the JavaScript tests.
 
 You'll notice that the test suite is structured based on where the code lives, and strongly encourages using the mock library to drive more accurate individual tests.

(binary image file added, 86 KiB; not shown)
@@ -318,7 +318,7 @@ Periodic Tasks
 ==============
 
 Lemur contains a few tasks that are run on a scheduled basis; currently the recommended way to run these tasks is to create
-a cron job that runs the commands.
+celery tasks or cron jobs that run these commands.
 
 There are currently three commands that could/should be run on a periodic basis:
 
@@ -326,11 +326,124 @@ There are currently three commands that could/should be run on a periodic basis:
 - `check_revoked`
 - `sync`
 
+If you are using LetsEncrypt, you must also run the following:
+
+- `fetch_all_pending_acme_certs`
+- `remove_old_acme_certs`
+
 How often you run these commands is largely up to the user. `notify` and `check_revoked` are typically run at least once a day.
-`sync` is typically run every 15 minutes.
+`sync` is typically run every 15 minutes. `fetch_all_pending_acme_certs` should be run frequently (every minute is fine).
+`remove_old_acme_certs` can be run more rarely, such as once every week.
 
 Example cron entries::
 
     0 22 * * * lemuruser export LEMUR_CONF=/Users/me/.lemur/lemur.conf.py; /www/lemur/bin/lemur notify expirations
     */15 * * * * lemuruser export LEMUR_CONF=/Users/me/.lemur/lemur.conf.py; /www/lemur/bin/lemur source sync -s all
     0 22 * * * lemuruser export LEMUR_CONF=/Users/me/.lemur/lemur.conf.py; /www/lemur/bin/lemur certificate check_revoked
+Example Celery configuration (to be placed in your configuration file)::
+
+    CELERYBEAT_SCHEDULE = {
+        'fetch_all_pending_acme_certs': {
+            'task': 'lemur.common.celery.fetch_all_pending_acme_certs',
+            'options': {
+                'expires': 180
+            },
+            'schedule': crontab(minute="*"),
+        },
+        'remove_old_acme_certs': {
+            'task': 'lemur.common.celery.remove_old_acme_certs',
+            'options': {
+                'expires': 180
+            },
+            'schedule': crontab(hour=7, minute=30, day_of_week=1),
+        },
+        'clean_all_sources': {
+            'task': 'lemur.common.celery.clean_all_sources',
+            'options': {
+                'expires': 180
+            },
+            'schedule': crontab(hour=1, minute=0, day_of_week=1),
+        },
+        'sync_all_sources': {
+            'task': 'lemur.common.celery.sync_all_sources',
+            'options': {
+                'expires': 180
+            },
+            'schedule': crontab(hour="*/3", minute=5),
+        },
+        'sync_source_destination': {
+            'task': 'lemur.common.celery.sync_source_destination',
+            'options': {
+                'expires': 180
+            },
+            'schedule': crontab(hour="*"),
+        }
+    }
+
+To enable Celery support, you must also have configuration values that tell Celery which broker and backend to use.
+Here are the Celery configuration variables that should be set::
+
+    CELERY_RESULT_BACKEND = 'redis://your_redis_url:6379'
+    CELERY_BROKER_URL = 'redis://your_redis_url:6379'
+    CELERY_IMPORTS = ('lemur.common.celery')
+    CELERY_TIMEZONE = 'UTC'
+
+You must start a single Celery scheduler instance and one or more worker instances in order to handle incoming tasks.
+The scheduler can be started with::
+
+    LEMUR_CONF='/location/to/conf.py' /location/to/lemur/bin/celery -A lemur.common.celery beat
+
+And the worker can be started with desired options such as the following::
+
+    LEMUR_CONF='/location/to/conf.py' /location/to/lemur/bin/celery -A lemur.common.celery worker --concurrency 10 -E -n lemurworker1@%%h
+
+Supervisor or systemd configurations should be created for these in production environments as appropriate.
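Since the `CELERYBEAT_SCHEDULE` above is a plain Python dictionary, a small sanity check before starting `celery beat` can catch malformed entries early. The sketch below is illustrative only: `validate_schedule` is not part of Lemur or Celery, and the stand-in schedule uses a plain string where the real configuration uses `crontab`.

```python
# Sketch: validate CELERYBEAT_SCHEDULE-style entries before handing them to beat.
# `validate_schedule` is a hypothetical helper, not a Lemur or Celery API.

def validate_schedule(schedule):
    """Return a list of (name, problem) pairs for malformed entries."""
    problems = []
    for name, entry in schedule.items():
        if not isinstance(entry, dict):
            problems.append((name, "entry is not a dict"))
            continue
        if "task" not in entry:
            problems.append((name, "missing 'task'"))
        if "schedule" not in entry:
            problems.append((name, "missing 'schedule'"))
    return problems

SCHEDULE = {
    "fetch_all_pending_acme_certs": {
        "task": "lemur.common.celery.fetch_all_pending_acme_certs",
        "options": {"expires": 180},
        "schedule": "* * * * *",  # stand-in for crontab(minute="*")
    },
    "broken_entry": {"options": {"expires": 180}},
}

# Only broken_entry is reported: it lacks both 'task' and 'schedule'.
print(validate_schedule(SCHEDULE))
```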
+Add support for LetsEncrypt
+===========================
+
+LetsEncrypt is a free, limited-feature certificate authority that offers publicly trusted certificates that are valid
+for 90 days. LetsEncrypt does not use organizational validation (OV), and instead relies on domain validation (DV).
+LetsEncrypt requires that we prove ownership of a domain before we're able to issue a certificate for that domain, each
+time we want a certificate.
+
+The most common methods to prove ownership are HTTP validation and DNS validation. Lemur supports DNS validation
+through the creation of DNS TXT records.
+
+In a nutshell, when we send a certificate request to LetsEncrypt, they generate a random token and ask us to put that
+token in a DNS TXT record to prove ownership of a domain. If a certificate request has multiple domains, we must
+prove ownership of all of these domains through this method. The token is typically written to a TXT record at
+_acme-challenge.domain.com. Once we create the appropriate TXT record(s), Lemur will try to validate propagation
+before requesting that LetsEncrypt finalize the certificate request and send us the certificate.
+
+.. figure:: letsencrypt_flow.png
+
+To start issuing certificates through LetsEncrypt, you must enable Celery support within Lemur first. After doing so,
+you need to create a LetsEncrypt authority. To do this, visit
+Authorities -> Create. Set the applicable attributes and click "More Options".
+
+.. figure:: letsencrypt_authority_1.png
+
+You will need to set "Certificate" to LetsEncrypt's active chain of trust for the authority you want to use. To find
+the active chain of trust at the time of writing, please visit `LetsEncrypt
+<https://letsencrypt.org/certificates/>`_.
+
+Under Acme_url, enter the appropriate endpoint URL. Lemur supports LetsEncrypt's V2 API, which we recommend you use.
+At the time of writing, the staging and production URLs for LetsEncrypt V2 are
+https://acme-staging-v02.api.letsencrypt.org/directory and https://acme-v02.api.letsencrypt.org/directory.
+
+.. figure:: letsencrypt_authority_2.png
+
+After creating the authorities, we will need to create a DNS provider. Visit `Admin` -> `DNS Providers` and click
+`Create`. Lemur comes with a few provider plugins built in, with different options. Create a DNS provider with the
+appropriate choices.
+
+.. figure:: create_dns_provider.png
+
+By default, users will need to select the DNS provider that is authoritative over their domain in order for the
+LetsEncrypt flow to function. However, Lemur will attempt to automatically determine the appropriate provider if
+possible. To enable this functionality, periodically (or through Cron/Celery) run `lemur dns_providers get_all_zones`.
+This command will traverse all DNS providers, determine which zones they control, and upload this list of zones to
+Lemur's database (in the dns_providers table). Alternatively, you can manually input this data.

(three binary image files added: 132 KiB, 218 KiB, 89 KiB; not shown)
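The `_acme-challenge` record naming described above can be sketched in a few lines. The helper below is hypothetical (not a Lemur API), but the naming rule follows the ACME DNS-01 convention: a wildcard name is validated at its base domain.

```python
# Sketch: derive the DNS TXT record name LetsEncrypt checks for each domain.
# `challenge_record_names` is an illustrative helper, not part of Lemur.

def challenge_record_names(domains):
    """Map each requested domain (including wildcards) to its _acme-challenge TXT record name."""
    names = {}
    for domain in domains:
        # A wildcard certificate for *.example.com is validated at the base domain.
        base = domain[2:] if domain.startswith("*.") else domain
        names[domain] = f"_acme-challenge.{base}"
    return names

print(challenge_record_names(["example.com", "*.example.com"]))
# {'example.com': '_acme-challenge.example.com', '*.example.com': '_acme-challenge.example.com'}
```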
@@ -12,7 +12,7 @@ Dependencies
 Some basic prerequisites which you'll need in order to run Lemur:
 
 * A UNIX-based operating system (we test on Ubuntu, develop on OS X)
-* Python 3.5 or greater
+* Python 3.7 or greater
 * PostgreSQL 9.4 or greater
 * Nginx
@@ -5,7 +5,8 @@
     :license: Apache, see LICENSE for more details.
 
 .. moduleauthor:: Kevin Glisson <kglisson@netflix.com>
+.. moduleauthor:: Curtis Castrapel <ccastrapel@netflix.com>
+.. moduleauthor:: Hossein Shafagh <hshafagh@netflix.com>
 
 """
 import time
@@ -9,6 +9,7 @@ command: celery -A lemur.common.celery worker --loglevel=info -l DEBUG -B
 """
 import copy
 import sys
+import time
 from datetime import datetime, timezone, timedelta
 
 from celery import Celery
@@ -16,6 +17,7 @@ from celery.exceptions import SoftTimeLimitExceeded
 from flask import current_app
 
 from lemur.authorities.service import get as get_authority
+from lemur.common.redis import RedisHandler
 from lemur.destinations import service as destinations_service
 from lemur.extensions import metrics, sentry
 from lemur.factory import create_app
@@ -30,6 +32,8 @@ if current_app:
 else:
     flask_app = create_app()
 
+red = RedisHandler().redis()
+
 
 def make_celery(app):
     celery = Celery(
@@ -68,6 +72,30 @@ def is_task_active(fun, task_id, args):
     return False
 
 
+@celery.task()
+def report_celery_last_success_metrics():
+    """
+    For each celery task, this will determine the number of seconds since it has last been successful.
+
+    Celery tasks should be emitting redis stats with a deterministic key (in our case, `f"{task}.last_success"`).
+    report_celery_last_success_metrics should be run periodically to emit metrics on when a task was last successful.
+    Admins can then alert when tasks are not run when intended. Admins should also alert when no metrics are emitted
+    from this function.
+    """
+    function = f"{__name__}.{sys._getframe().f_code.co_name}"
+    current_time = int(time.time())
+    schedule = current_app.config.get('CELERYBEAT_SCHEDULE')
+    for _, t in schedule.items():
+        task = t.get("task")
+        last_success = int(red.get(f"{task}.last_success") or 0)
+        metrics.send(f"{task}.time_since_last_success", 'gauge', current_time - last_success)
+    red.set(
+        f"{function}.last_success", int(time.time())
+    )  # Alert if this metric is not seen
+    metrics.send(f"{function}.success", 'counter', 1)
+
+
 @celery.task(soft_time_limit=600)
 def fetch_acme_cert(id):
     """
@ -80,8 +108,9 @@ def fetch_acme_cert(id):
|
||||||
if celery.current_task:
|
if celery.current_task:
|
||||||
task_id = celery.current_task.request.id
|
task_id = celery.current_task.request.id
|
||||||
|
|
||||||
|
function = f"{__name__}.{sys._getframe().f_code.co_name}"
|
||||||
log_data = {
|
log_data = {
|
||||||
"function": "{}.{}".format(__name__, sys._getframe().f_code.co_name),
|
"function": function,
|
||||||
"message": "Resolving pending certificate {}".format(id),
|
"message": "Resolving pending certificate {}".format(id),
|
||||||
"task_id": task_id,
|
"task_id": task_id,
|
||||||
"id": id,
|
"id": id,
|
||||||
|
@ -165,11 +194,15 @@ def fetch_acme_cert(id):
|
||||||
log_data["failed"] = failed
|
log_data["failed"] = failed
|
||||||
log_data["wrong_issuer"] = wrong_issuer
|
log_data["wrong_issuer"] = wrong_issuer
|
||||||
current_app.logger.debug(log_data)
|
current_app.logger.debug(log_data)
|
||||||
|
metrics.send(f"{function}.resolved", 'gauge', new)
|
||||||
|
metrics.send(f"{function}.failed", 'gauge', failed)
|
||||||
|
metrics.send(f"{function}.wrong_issuer", 'gauge', wrong_issuer)
|
||||||
print(
|
print(
|
||||||
"[+] Certificates: New: {new} Failed: {failed} Not using ACME: {wrong_issuer}".format(
|
"[+] Certificates: New: {new} Failed: {failed} Not using ACME: {wrong_issuer}".format(
|
||||||
new=new, failed=failed, wrong_issuer=wrong_issuer
|
new=new, failed=failed, wrong_issuer=wrong_issuer
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
|
red.set(f'{function}.last_success', int(time.time()))
|
||||||
|
|
||||||
|
|
||||||
@celery.task()
|
@celery.task()
|
||||||
|
@ -177,8 +210,9 @@ def fetch_all_pending_acme_certs():
|
||||||
"""Instantiate celery workers to resolve all pending Acme certificates"""
|
"""Instantiate celery workers to resolve all pending Acme certificates"""
|
||||||
pending_certs = pending_certificate_service.get_unresolved_pending_certs()
|
pending_certs = pending_certificate_service.get_unresolved_pending_certs()
|
||||||
|
|
||||||
|
function = f"{__name__}.{sys._getframe().f_code.co_name}"
|
||||||
log_data = {
|
log_data = {
|
||||||
"function": "{}.{}".format(__name__, sys._getframe().f_code.co_name),
|
"function": function,
|
||||||
"message": "Starting job.",
|
"message": "Starting job.",
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -195,11 +229,18 @@ def fetch_all_pending_acme_certs():
|
||||||
current_app.logger.debug(log_data)
|
current_app.logger.debug(log_data)
|
||||||
fetch_acme_cert.delay(cert.id)
|
fetch_acme_cert.delay(cert.id)
|
||||||
|
|
||||||
|
red.set(f'{function}.last_success', int(time.time()))
|
||||||
|
metrics.send(f"{function}.success", 'counter', 1)
|
||||||
|
|
||||||
|
|
||||||
@celery.task()
|
@celery.task()
|
||||||
def remove_old_acme_certs():
|
def remove_old_acme_certs():
|
||||||
"""Prune old pending acme certificates from the database"""
|
"""Prune old pending acme certificates from the database"""
|
||||||
log_data = {"function": "{}.{}".format(__name__, sys._getframe().f_code.co_name)}
|
function = f"{__name__}.{sys._getframe().f_code.co_name}"
|
||||||
|
log_data = {
|
||||||
|
"function": function,
|
||||||
|
"message": "Starting job.",
|
||||||
|
}
|
||||||
pending_certs = pending_certificate_service.get_pending_certs("all")
|
pending_certs = pending_certificate_service.get_pending_certs("all")
|
||||||
|
|
||||||
# Delete pending certs more than a week old
|
# Delete pending certs more than a week old
|
||||||
|
@ -211,6 +252,9 @@ def remove_old_acme_certs():
|
||||||
current_app.logger.debug(log_data)
|
current_app.logger.debug(log_data)
|
||||||
pending_certificate_service.delete(cert)
|
pending_certificate_service.delete(cert)
|
||||||
|
|
||||||
|
red.set(f'{function}.last_success', int(time.time()))
|
||||||
|
metrics.send(f"{function}.success", 'counter', 1)
|
||||||
|
|
||||||
|
|
||||||
@celery.task()
|
@celery.task()
|
||||||
def clean_all_sources():
|
def clean_all_sources():
|
||||||
|
@ -218,6 +262,7 @@ def clean_all_sources():
|
||||||
This function will clean unused certificates from sources. This is a destructive operation and should only
|
This function will clean unused certificates from sources. This is a destructive operation and should only
|
||||||
be ran periodically. This function triggers one celery task per source.
|
be ran periodically. This function triggers one celery task per source.
|
||||||
"""
|
"""
|
||||||
|
function = f"{__name__}.{sys._getframe().f_code.co_name}"
|
||||||
sources = validate_sources("all")
|
sources = validate_sources("all")
|
||||||
for source in sources:
|
for source in sources:
|
||||||
current_app.logger.debug(
|
current_app.logger.debug(
|
||||||
|
@ -225,6 +270,9 @@ def clean_all_sources():
|
||||||
)
|
)
|
||||||
clean_source.delay(source.label)
|
clean_source.delay(source.label)
|
||||||
|
|
||||||
|
red.set(f'{function}.last_success', int(time.time()))
|
||||||
|
metrics.send(f"{function}.success", 'counter', 1)
|
||||||
|
|
||||||
|
|
||||||
@celery.task()
|
@celery.task()
|
||||||
def clean_source(source):
|
def clean_source(source):
|
||||||
|
@ -244,6 +292,7 @@ def sync_all_sources():
|
||||||
"""
|
"""
|
||||||
This function will sync certificates from all sources. This function triggers one celery task per source.
|
This function will sync certificates from all sources. This function triggers one celery task per source.
|
||||||
"""
|
"""
|
||||||
|
function = f"{__name__}.{sys._getframe().f_code.co_name}"
|
||||||
sources = validate_sources("all")
|
sources = validate_sources("all")
|
||||||
for source in sources:
|
for source in sources:
|
||||||
current_app.logger.debug(
|
current_app.logger.debug(
|
||||||
|
@ -251,6 +300,9 @@ def sync_all_sources():
|
||||||
)
|
)
|
||||||
sync_source.delay(source.label)
|
sync_source.delay(source.label)
|
||||||
|
|
||||||
|
red.set(f'{function}.last_success', int(time.time()))
|
||||||
|
metrics.send(f"{function}.success", 'counter', 1)
|
||||||
|
|
||||||
|
|
||||||
@celery.task(soft_time_limit=7200)
|
@celery.task(soft_time_limit=7200)
|
||||||
def sync_source(source):
|
def sync_source(source):
|
||||||
|
@ -261,7 +313,7 @@ def sync_source(source):
|
||||||
:return:
|
:return:
|
||||||
"""
|
"""
|
||||||
|
|
||||||
function = "{}.{}".format(__name__, sys._getframe().f_code.co_name)
|
function = f"{__name__}.{sys._getframe().f_code.co_name}"
|
||||||
task_id = None
|
task_id = None
|
||||||
if celery.current_task:
|
if celery.current_task:
|
||||||
task_id = celery.current_task.request.id
|
task_id = celery.current_task.request.id
|
||||||
|
@ -279,6 +331,7 @@ def sync_source(source):
|
||||||
return
|
return
|
||||||
try:
|
try:
|
||||||
sync([source])
|
sync([source])
|
||||||
|
metrics.send(f"{function}.success", 'counter', '1', metric_tags={"source": source})
|
||||||
except SoftTimeLimitExceeded:
|
except SoftTimeLimitExceeded:
|
||||||
log_data["message"] = "Error syncing source: Time limit exceeded."
|
log_data["message"] = "Error syncing source: Time limit exceeded."
|
||||||
current_app.logger.error(log_data)
|
current_app.logger.error(log_data)
|
||||||
|
@ -290,6 +343,8 @@ def sync_source(source):
|
||||||
|
|
||||||
log_data["message"] = "Done syncing source"
|
log_data["message"] = "Done syncing source"
|
||||||
current_app.logger.debug(log_data)
|
current_app.logger.debug(log_data)
|
||||||
|
metrics.send(f"{function}.success", 'counter', 1, metric_tags={"source": source})
|
||||||
|
red.set(f'{function}.last_success', int(time.time()))
|
||||||
|
|
||||||
|
|
||||||
@celery.task()
|
@celery.task()
|
||||||
|
@ -302,9 +357,12 @@ def sync_source_destination():
|
||||||
We rely on account numbers to avoid duplicates.
|
We rely on account numbers to avoid duplicates.
|
||||||
"""
|
"""
|
||||||
current_app.logger.debug("Syncing AWS destinations and sources")
|
current_app.logger.debug("Syncing AWS destinations and sources")
|
||||||
|
function = f"{__name__}.{sys._getframe().f_code.co_name}"
|
||||||
|
|
||||||
for dst in destinations_service.get_all():
|
for dst in destinations_service.get_all():
|
||||||
if add_aws_destination_to_sources(dst):
|
if add_aws_destination_to_sources(dst):
|
||||||
current_app.logger.debug("Source: %s added", dst.label)
|
current_app.logger.debug("Source: %s added", dst.label)
|
||||||
|
|
||||||
current_app.logger.debug("Completed Syncing AWS destinations and sources")
|
current_app.logger.debug("Completed Syncing AWS destinations and sources")
|
||||||
|
red.set(f'{function}.last_success', int(time.time()))
|
||||||
|
metrics.send(f"{function}.success", 'counter', 1)
|
||||||
|
|
|
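The `f"{task}.last_success"` heartbeat pattern these tasks introduce can be sketched on its own, independent of Celery and Redis. In this sketch a plain dict stands in for the Redis client and the helper names (`record_success`, `time_since_last_success`) are hypothetical, not part of the commit:

```python
import time

# Plain dict standing in for the Redis client used in the commit (hypothetical).
store = {}


def record_success(task_name):
    # Each task writes a heartbeat timestamp under a deterministic key.
    store[f"{task_name}.last_success"] = int(time.time())


def time_since_last_success(task_name):
    # Missing keys read as 0, so a task that has never succeeded reports a
    # very large gauge value that alerting can catch (same `or 0` fallback
    # as report_celery_last_success_metrics).
    last = int(store.get(f"{task_name}.last_success") or 0)
    return int(time.time()) - last


record_success("lemur.common.celery.fetch_all_pending_acme_certs")
```

A monitor that periodically emits `time_since_last_success` per scheduled task can then alert when the gauge grows past the task's expected interval.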
@@ -0,0 +1,52 @@
+"""
+Helper Class for Redis
+
+"""
+import redis
+import sys
+from flask import current_app
+from lemur.extensions import sentry
+from lemur.factory import create_app
+
+if current_app:
+    flask_app = current_app
+else:
+    flask_app = create_app()
+
+
+class RedisHandler:
+    def __init__(self, host=flask_app.config.get('REDIS_HOST', 'localhost'),
+                 port=flask_app.config.get('REDIS_PORT', 6379),
+                 db=flask_app.config.get('REDIS_DB', 0)):
+        self.host = host
+        self.port = port
+        self.db = db
+
+    def redis(self, db=0):
+        # The decode_responses flag here directs the client to convert the responses from Redis into Python strings
+        # using the default encoding utf-8. This is client specific.
+        function = f"{__name__}.{sys._getframe().f_code.co_name}"
+        try:
+            red = redis.StrictRedis(host=self.host, port=self.port, db=self.db, encoding="utf-8", decode_responses=True)
+            red.set("test", 0)
+        except redis.ConnectionError:
+            log_data = {
+                "function": function,
+                "message": "Redis Connection error",
+                "host": self.host,
+                "port": self.port
+            }
+            current_app.logger.error(log_data)
+            sentry.captureException()
+        return red
+
+
+def redis_get(key, default=None):
+    red = RedisHandler().redis()
+    try:
+        v = red.get(key)
+    except redis.exceptions.ConnectionError:
+        v = None
+    if not v:
+        return default
+    return v
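The fallback semantics of a `redis_get`-style helper can be sketched without a live Redis server; here `StubRedis` is a hypothetical in-memory stand-in for the real client, used only to show that any missing or falsy value yields the caller's default:

```python
class StubRedis:
    """In-memory stand-in for a Redis client (hypothetical, for illustration)."""

    def __init__(self):
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value
        return True


def get_with_default(red, key, default=None):
    # Mirrors the redis_get helper above: connection errors and missing
    # keys both collapse to the supplied default.
    try:
        v = red.get(key)
    except ConnectionError:
        v = None
    return v if v else default


red = StubRedis()
red.set("lemur.key", "42")
```

This is why callers of such a helper never have to distinguish "Redis down" from "key absent": both paths return the default.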
@@ -33,22 +33,22 @@ def get_dynect_session():
     return dynect_session


-def _has_dns_propagated(name, token):
+def _has_dns_propagated(fqdn, token):
     txt_records = []
     try:
         dns_resolver = dns.resolver.Resolver()
-        dns_resolver.nameservers = [get_authoritative_nameserver(name)]
-        dns_response = dns_resolver.query(name, "TXT")
+        dns_resolver.nameservers = [get_authoritative_nameserver(fqdn)]
+        dns_response = dns_resolver.query(fqdn, "TXT")
         for rdata in dns_response:
             for txt_record in rdata.strings:
                 txt_records.append(txt_record.decode("utf-8"))
     except dns.exception.DNSException:
-        metrics.send("has_dns_propagated_fail", "counter", 1)
+        metrics.send("has_dns_propagated_fail", "counter", 1, metric_tags={"dns": fqdn})
         return False

     for txt_record in txt_records:
         if txt_record == token:
-            metrics.send("has_dns_propagated_success", "counter", 1)
+            metrics.send("has_dns_propagated_success", "counter", 1, metric_tags={"dns": fqdn})
             return True

     return False

@@ -61,12 +61,12 @@ def wait_for_dns_change(change_id, account_number=None):
         status = _has_dns_propagated(fqdn, token)
         current_app.logger.debug("Record status for fqdn: {}: {}".format(fqdn, status))
         if status:
-            metrics.send("wait_for_dns_change_success", "counter", 1)
+            metrics.send("wait_for_dns_change_success", "counter", 1, metric_tags={"dns": fqdn})
             break
         time.sleep(10)
     if not status:
         # TODO: Delete associated DNS text record here
-        metrics.send("wait_for_dns_change_fail", "counter", 1)
+        metrics.send("wait_for_dns_change_fail", "counter", 1, metric_tags={"dns": fqdn})
         sentry.captureException(extra={"fqdn": str(fqdn), "txt_record": str(token)})
         metrics.send(
             "wait_for_dns_change_error",
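Stripped of the DNS lookup and metrics, the propagation check above reduces to comparing the expected ACME challenge token against the decoded TXT records. A minimal self-contained sketch, with the records supplied directly instead of queried:

```python
def has_dns_propagated(txt_records, token):
    # True once any decoded TXT record matches the challenge token,
    # mirroring the matching loop in _has_dns_propagated.
    for txt_record in txt_records:
        if txt_record == token:
            return True
    return False


# TXT answers arrive as bytes from dnspython and are decoded before comparison.
records = [b"unrelated-entry".decode("utf-8"), b"expected-token".decode("utf-8")]
```

The retry loop in `wait_for_dns_change` simply calls this predicate until it flips to `True` or the attempt budget runs out.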
@@ -0,0 +1,13 @@
+import fakeredis
+import time
+import sys
+
+
+def test_write_and_read_from_redis():
+    function = f"{__name__}.{sys._getframe().f_code.co_name}"
+
+    red = fakeredis.FakeStrictRedis()
+    key = f"{function}.last_success"
+    value = int(time.time())
+    assert red.set(key, value) is True
+    assert (int(red.get(key)) == value) is True
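The `int(red.get(key))` cast in the test above matters because a client configured with `decode_responses=True` hands values back as strings, even when an integer was written. A dict-based sketch of that round trip (the dict is a hypothetical stand-in for the fakeredis client):

```python
import time

# Stand-in for a Redis client with decode_responses=True (hypothetical):
# numeric values written as ints come back as str and must be cast on read.
store = {}
key = "report_celery_last_success_metrics.last_success"
value = int(time.time())

store[key] = str(value)  # Redis stores the value as a string
read_back = int(store[key])  # cast on read, as the test above does
```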
@@ -6,31 +6,34 @@
 #
 aspy.yaml==1.3.0          # via pre-commit
 bleach==3.1.0             # via readme-renderer
-certifi==2019.3.9         # via requests
+certifi==2019.6.16        # via requests
 cfgv==2.0.0               # via pre-commit
 chardet==3.0.4            # via requests
 docutils==0.14            # via readme-renderer
 flake8==3.5.0
-identify==1.4.3           # via pre-commit
+identify==1.4.5           # via pre-commit
 idna==2.8                 # via requests
-importlib-metadata==0.17  # via pre-commit
+importlib-metadata==0.18  # via pre-commit
 invoke==1.2.0
 mccabe==0.6.1             # via flake8
 nodeenv==1.3.3
 pkginfo==1.5.0.1          # via twine
-pre-commit==1.16.1
+pre-commit==1.17.0
 pycodestyle==2.3.1        # via flake8
 pyflakes==1.6.0           # via flake8
 pygments==2.4.2           # via readme-renderer
-pyyaml==5.1
+pyyaml==5.1.1
 readme-renderer==24.0     # via twine
 requests-toolbelt==0.9.1  # via twine
 requests==2.22.0          # via requests-toolbelt, twine
 six==1.12.0               # via bleach, cfgv, pre-commit, readme-renderer
 toml==0.10.0              # via pre-commit
-tqdm==4.32.1              # via twine
+tqdm==4.32.2              # via twine
 twine==1.13.0
 urllib3==1.25.3           # via requests
-virtualenv==16.6.0        # via pre-commit
+virtualenv==16.6.1        # via pre-commit
 webencodings==0.5.1       # via bleach
-zipp==0.5.1               # via importlib-metadata
+zipp==0.5.2               # via importlib-metadata
+
+# The following packages are considered to be unsafe in a requirements file:
+# setuptools==41.0.1      # via twine
@@ -4,23 +4,23 @@
 #
 # pip-compile --no-index --output-file=requirements-docs.txt requirements-docs.in
 #
-acme==0.34.2
+acme==0.36.0
 alabaster==0.7.12  # via sphinx
 alembic-autogenerate-enums==0.0.2
-alembic==1.0.10
+alembic==1.0.11
 amqp==2.5.0
-aniso8601==6.0.0
+aniso8601==7.0.0
 arrow==0.14.2
 asn1crypto==0.24.0
 asyncpool==1.0
 babel==2.7.0  # via sphinx
-bcrypt==3.1.6
+bcrypt==3.1.7
 billiard==3.6.0.0
 blinker==1.4
-boto3==1.9.160
+boto3==1.9.187
-botocore==1.12.160
+botocore==1.12.187
 celery[redis]==4.3.0
-certifi==2019.3.9
+certifi==2019.6.16
 certsrv==2.1.1
 cffi==1.12.3
 chardet==3.0.4

@@ -32,7 +32,7 @@ dnspython==1.15.0
 docutils==0.14
 dyn==1.8.1
 flask-bcrypt==0.7.1
-flask-cors==3.0.7
+flask-cors==3.0.8
 flask-mail==0.9.1
 flask-migrate==2.5.2
 flask-principal==0.4.0

@@ -40,10 +40,10 @@ flask-replicated==1.3
 flask-restful==0.3.7
 flask-script==2.0.6
 flask-sqlalchemy==2.4.0
-flask==1.0.3
+flask==1.1.1
 future==0.17.1
 gunicorn==19.9.0
-hvac==0.9.1
+hvac==0.9.3
 idna==2.8
 imagesize==1.1.0  # via sphinx
 inflection==0.3.1

@@ -51,21 +51,21 @@ itsdangerous==1.1.0
 javaobj-py3==0.3.0
 jinja2==2.10.1
 jmespath==0.9.4
-josepy==1.1.0
+josepy==1.2.0
 jsonlines==1.2.0
 kombu==4.5.0
 lockfile==0.12.2
 logmatic-python==0.1.7
-mako==1.0.11
+mako==1.0.13
 markupsafe==1.1.1
-marshmallow-sqlalchemy==0.16.3
+marshmallow-sqlalchemy==0.17.0
-marshmallow==2.19.2
+marshmallow==2.19.5
 mock==3.0.5
 ndg-httpsclient==0.5.1
 packaging==19.0  # via sphinx
-paramiko==2.4.2
+paramiko==2.6.0
 pem==19.1.0
-psycopg2==2.8.2
+psycopg2==2.8.3
 pyasn1-modules==0.2.5
 pyasn1==0.4.5
 pycparser==2.19

@@ -81,17 +81,17 @@ python-dateutil==2.8.0
 python-editor==1.0.4
 python-json-logger==0.1.11
 pytz==2019.1
-pyyaml==5.1
+pyyaml==5.1.1
 raven[flask]==6.10.0
 redis==3.2.1
 requests-toolbelt==0.9.1
 requests[security]==2.22.0
 retrying==1.3.3
-s3transfer==0.2.0
+s3transfer==0.2.1
 six==1.12.0
-snowballstemmer==1.2.1  # via sphinx
+snowballstemmer==1.9.0  # via sphinx
 sphinx-rtd-theme==0.4.3
-sphinx==2.1.0
+sphinx==2.1.2
 sphinxcontrib-applehelp==1.0.1  # via sphinx
 sphinxcontrib-devhelp==1.0.1  # via sphinx
 sphinxcontrib-htmlhelp==1.0.2  # via sphinx

@@ -99,11 +99,14 @@ sphinxcontrib-httpdomain==1.7.0
 sphinxcontrib-jsmath==1.0.1  # via sphinx
 sphinxcontrib-qthelp==1.0.2  # via sphinx
 sphinxcontrib-serializinghtml==1.1.3  # via sphinx
-sqlalchemy-utils==0.33.11
+sqlalchemy-utils==0.34.0
-sqlalchemy==1.3.4
+sqlalchemy==1.3.5
 tabulate==0.8.3
 twofish==0.3.0
 urllib3==1.25.3
 vine==1.3.0
 werkzeug==0.15.4
 xmltodict==0.12.0
+
+# The following packages are considered to be unsafe in a requirements file:
+# setuptools==41.0.1  # via acme, josepy, sphinx
@@ -5,6 +5,7 @@ black
 coverage
 factory-boy
 Faker
+fakeredis
 freezegun
 moto
 nose
@@ -7,33 +7,35 @@
 appdirs==1.4.3            # via black
 asn1crypto==0.24.0        # via cryptography
 atomicwrites==1.3.0       # via pytest
-attrs==19.1.0             # via black, pytest
+attrs==19.1.0             # via black, jsonschema, pytest
-aws-sam-translator==1.11.0  # via cfn-lint
+aws-sam-translator==1.12.0  # via cfn-lint
 aws-xray-sdk==2.4.2       # via moto
-bandit==1.6.0
+bandit==1.6.2
 black==19.3b0
-boto3==1.9.160            # via aws-sam-translator, moto
+boto3==1.9.187            # via aws-sam-translator, moto
 boto==2.49.0              # via moto
-botocore==1.12.160        # via aws-xray-sdk, boto3, moto, s3transfer
+botocore==1.12.187        # via aws-xray-sdk, boto3, moto, s3transfer
-certifi==2019.3.9         # via requests
+certifi==2019.6.16        # via requests
 cffi==1.12.3              # via cryptography
-cfn-lint==0.21.4          # via moto
+cfn-lint==0.22.2          # via moto
 chardet==3.0.4            # via requests
 click==7.0                # via black, flask
 coverage==4.5.3
-cryptography==2.7         # via moto
+cryptography==2.7         # via moto, sshpubkeys
-docker==4.0.1             # via moto
+datetime==4.3             # via moto
+docker==4.0.2             # via moto
 docutils==0.14            # via botocore
-ecdsa==0.13.2             # via python-jose
+ecdsa==0.13.2             # via python-jose, sshpubkeys
 factory-boy==2.12.0
 faker==1.0.7
-flask==1.0.3              # via pytest-flask
+fakeredis==1.0.3
+flask==1.1.1              # via pytest-flask
 freezegun==0.3.12
 future==0.17.1            # via aws-xray-sdk, python-jose
 gitdb2==2.0.5             # via gitpython
 gitpython==2.1.11         # via bandit
 idna==2.8                 # via moto, requests
-importlib-metadata==0.17  # via pluggy, pytest
+importlib-metadata==0.18  # via pluggy, pytest
 itsdangerous==1.1.0       # via flask
 jinja2==2.10.1            # via flask, moto
 jmespath==0.9.4           # via boto3, botocore

@@ -41,34 +43,38 @@ jsondiff==1.1.2           # via moto
 jsonpatch==1.23           # via cfn-lint
 jsonpickle==1.2           # via aws-xray-sdk
 jsonpointer==2.0          # via jsonpatch
-jsonschema==2.6.0         # via aws-sam-translator, cfn-lint
+jsonschema==3.0.1         # via aws-sam-translator, cfn-lint
 markupsafe==1.1.1         # via jinja2
 mock==3.0.5               # via moto
-more-itertools==7.0.0     # via pytest
+more-itertools==7.1.0     # via pytest
-moto==1.3.8
+moto==1.3.11
 nose==1.3.7
 packaging==19.0           # via pytest
-pbr==5.2.1                # via stevedore
+pbr==5.4.0                # via stevedore
 pluggy==0.12.0            # via pytest
 py==1.8.0                 # via pytest
 pyasn1==0.4.5             # via rsa
 pycparser==2.19           # via cffi
 pyflakes==2.1.1
 pyparsing==2.4.0          # via packaging
+pyrsistent==0.15.3        # via jsonschema
 pytest-flask==0.15.0
 pytest-mock==1.10.4
-pytest==4.6.2
+pytest==5.0.1
 python-dateutil==2.8.0    # via botocore, faker, freezegun, moto
 python-jose==3.0.1        # via moto
-pytz==2019.1              # via moto
+pytz==2019.1              # via datetime, moto
-pyyaml==5.1
+pyyaml==5.1.1
+redis==3.2.1              # via fakeredis
 requests-mock==1.6.0
 requests==2.22.0          # via cfn-lint, docker, moto, requests-mock, responses
 responses==0.10.6         # via moto
 rsa==4.0                  # via python-jose
-s3transfer==0.2.0         # via boto3
+s3transfer==0.2.1         # via boto3
-six==1.12.0               # via aws-sam-translator, bandit, cfn-lint, cryptography, docker, faker, freezegun, mock, moto, packaging, pytest, python-dateutil, python-jose, requests-mock, responses, stevedore, websocket-client
+six==1.12.0               # via aws-sam-translator, bandit, cfn-lint, cryptography, docker, faker, freezegun, jsonschema, mock, moto, packaging, pyrsistent, python-dateutil, python-jose, requests-mock, responses, stevedore, websocket-client
 smmap2==2.0.5             # via gitdb2
+sshpubkeys==3.1.0         # via moto
+sortedcontainers==2.1.0   # via fakeredis
 stevedore==1.30.1         # via bandit
 text-unidecode==1.2       # via faker
 toml==0.10.0              # via black

@@ -76,6 +82,10 @@ urllib3==1.25.3           # via botocore, requests
 wcwidth==0.1.7            # via pytest
 websocket-client==0.56.0  # via docker
 werkzeug==0.15.4          # via flask, moto, pytest-flask
-wrapt==1.11.1             # via aws-xray-sdk
+wrapt==1.11.2             # via aws-xray-sdk
 xmltodict==0.12.0         # via moto
-zipp==0.5.1               # via importlib-metadata
+zipp==0.5.2               # via importlib-metadata
+zope.interface==4.6.0     # via datetime
+
+# The following packages are considered to be unsafe in a requirements file:
+# setuptools==41.0.1      # via cfn-lint, jsonschema, zope.interface
@@ -4,21 +4,21 @@
 #
 # pip-compile --no-index --output-file=requirements.txt requirements.in
 #
-acme==0.34.2
+acme==0.36.0
 alembic-autogenerate-enums==0.0.2
-alembic==1.0.10  # via flask-migrate
+alembic==1.0.11  # via flask-migrate
 amqp==2.5.0  # via kombu
-aniso8601==6.0.0  # via flask-restful
+aniso8601==7.0.0  # via flask-restful
 arrow==0.14.2
 asn1crypto==0.24.0  # via cryptography
 asyncpool==1.0
-bcrypt==3.1.6  # via flask-bcrypt, paramiko
+bcrypt==3.1.7  # via flask-bcrypt, paramiko
 billiard==3.6.0.0  # via celery
 blinker==1.4  # via flask-mail, flask-principal, raven
-boto3==1.9.160
+boto3==1.9.187
-botocore==1.12.160
+botocore==1.12.187
 celery[redis]==4.3.0
-certifi==2019.3.9
+certifi==2019.6.16
 certsrv==2.1.1
 cffi==1.12.3  # via bcrypt, cryptography, pynacl
 chardet==3.0.4  # via requests

@@ -30,7 +30,7 @@ dnspython==1.15.0  # via dnspython3
 docutils==0.14  # via botocore
 dyn==1.8.1
 flask-bcrypt==0.7.1
-flask-cors==3.0.7
+flask-cors==3.0.8
 flask-mail==0.9.1
 flask-migrate==2.5.2
 flask-principal==0.4.0

@@ -38,32 +38,32 @@ flask-replicated==1.3
 flask-restful==0.3.7
 flask-script==2.0.6
 flask-sqlalchemy==2.4.0
-flask==1.0.3
+flask==1.1.1
 future==0.17.1
 gunicorn==19.9.0
-hvac==0.9.1
+hvac==0.9.3
 idna==2.8  # via requests
 inflection==0.3.1
 itsdangerous==1.1.0  # via flask
 javaobj-py3==0.3.0  # via pyjks
 jinja2==2.10.1
 jmespath==0.9.4  # via boto3, botocore
-josepy==1.1.0  # via acme
+josepy==1.2.0  # via acme
 jsonlines==1.2.0  # via cloudflare
 kombu==4.5.0
 lockfile==0.12.2
 logmatic-python==0.1.7
-mako==1.0.11  # via alembic
+mako==1.0.13  # via alembic
 markupsafe==1.1.1  # via jinja2, mako
-marshmallow-sqlalchemy==0.16.3
+marshmallow-sqlalchemy==0.17.0
-marshmallow==2.19.2
+marshmallow==2.19.5
 mock==3.0.5  # via acme
 ndg-httpsclient==0.5.1
-paramiko==2.4.2
+paramiko==2.6.0
 pem==19.1.0
-psycopg2==2.8.2
+psycopg2==2.8.3
 pyasn1-modules==0.2.5  # via pyjks, python-ldap
-pyasn1==0.4.5  # via ndg-httpsclient, paramiko, pyasn1-modules, pyjks, python-ldap
+pyasn1==0.4.5  # via ndg-httpsclient, pyasn1-modules, pyjks, python-ldap
 pycparser==2.19  # via cffi
 pycryptodomex==3.8.2  # via pyjks
 pyjks==19.0.0

@@ -76,19 +76,22 @@ python-editor==1.0.4  # via alembic
 python-json-logger==0.1.11  # via logmatic-python
 python-ldap==3.2.0
 pytz==2019.1  # via acme, celery, flask-restful, pyrfc3339
-pyyaml==5.1
+pyyaml==5.1.1
 raven[flask]==6.10.0
 redis==3.2.1
 requests-toolbelt==0.9.1  # via acme
 requests[security]==2.22.0
 retrying==1.3.3
-s3transfer==0.2.0  # via boto3
+s3transfer==0.2.1  # via boto3
 six==1.12.0
-sqlalchemy-utils==0.33.11
+sqlalchemy-utils==0.34.0
-sqlalchemy==1.3.4  # via alembic, flask-sqlalchemy, marshmallow-sqlalchemy, sqlalchemy-utils
+sqlalchemy==1.3.5  # via alembic, flask-sqlalchemy, marshmallow-sqlalchemy, sqlalchemy-utils
 tabulate==0.8.3
 twofish==0.3.0  # via pyjks
 urllib3==1.25.3  # via botocore, requests
 vine==1.3.0  # via amqp, celery
 werkzeug==0.15.4  # via flask
 xmltodict==0.12.0
+
+# The following packages are considered to be unsafe in a requirements file:
+# setuptools==41.0.1  # via acme, josepy