language stringclasses 1 value | file_name stringlengths 5 79 | commit_id stringlengths 40 40 | n_whitespaces int64 4 10.8k | d_id int64 12 121k | ast_levels int64 6 36 | path stringlengths 7 134 | ast_errors stringlengths 0 3.11k | repo stringlengths 3 22 | url stringlengths 31 59 | n_ast_errors int64 0 9 | fun_name stringlengths 1 80 | complexity int64 1 139 | documentation dict | n_identifiers int64 2 176 | nloc int64 1 413 | commit_message stringlengths 3 15.3k | n_ast_nodes int64 17 19.2k | vocab_size int64 4 1.11k | code stringlengths 101 62.2k | n_words int64 4 4.82k | token_counts int64 5 3.06k | id int64 20 338k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Python | dd_controller.py | ea964109d654394cc0a5237e6ec5510ba6404097 | 130 | 83,548 | 11 | gamestonk_terminal/cryptocurrency/due_diligence/dd_controller.py | OpenBBTerminal | https://github.com/OpenBB-finance/OpenBBTerminal.git | 0 | call_bc | 2 | {
"docstring": "Process bc command\n        Blockchain explorer URLs for loaded coin. Those are sites like etherscan.io or polkascan.io\n        in which you can see all blockchain data e.g. all txs, all tokens, all contracts...\n        ",
"language": "en",
"n_whitespaces": 84,
... | 18 | 15 | Crypto menu refactor (#1119)
* enabled some crypto commands in dd to be called independent of source loaded
* support for coin_map_df in all dd functions + load ta and plot chart refactor
* updated tests and removed coingecko scrapping where possible
* removed ref of command from hugo
* updated pycoingecko... | 97 | 20 | def call_bc(self, other_args):
parser = argparse.ArgumentParser(
add_help=False,
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
prog="bc",
description=,
)
ns_parser = parse_known_args_and_warn(
parser, other_args, EXPO... | 22 | 61 | 281,136 | |
Python | conftest.py | fa2ad657482aca9dc628e6d7062b8badf2706bb6 | 37 | 5,330 | 10 | tests/conftest.py | spotify-downloader | https://github.com/spotDL/spotify-downloader.git | 0 | communicate | 1 | {
"docstring": "\n    Ensure that the file has been downloaded, and create an empty output file,\n    to avoid an infinite loop.\n    ",
"language": "en",
"n_whitespaces": 38,
"n_words": 16,
"vocab_size": 16
} | 7 | 4 | v4 init | 57 | 9 | async def communicate(self):
assert self._input.is_file()
self._output.open("w").close()
return (None, None)
| 9 | 32 | 30,126 | |
Python | test_actor_reuse.py | 1510fb2cd631b2776092fb45ee4082e5e65f16f8 | 141 | 31,314 | 15 | python/ray/tune/tests/test_actor_reuse.py | ray | https://github.com/ray-project/ray.git | 0 | test_multi_trial_reuse_with_failing | 1 | {
"docstring": "Test that failing trial's actors are not reused.\n\n - 2 trials can run at the same time\n - Trial 1 succeeds, trial 2 fails\n - Trial 3 will be scheduled after trial 2 failed, so won't reuse actor\n - Trial 4 will be scheduled after trial 1 succeeded, so will reuse actor\n ",
"langua... | 19 | 17 | [air/tune] Internal resource management 2 - Ray Tune to use new Ray AIR resource manager (#30016)
Includes/depends on #30777
TLDR: This PR refactors Ray Tune's resource management to use a central AIR resource management package instead of the tightly coupled PlacementGroupManager.
Ray Tune's resource management... | 183 | 36 | def test_multi_trial_reuse_with_failing(ray_start_4_cpus_extra):
os.environ["TUNE_MAX_PENDING_TRIALS_PG"] = "2"
register_trainable("foo2", MyResettableClass)
[trial1, trial2, trial3, trial4] = tune.run(
"foo2",
config={
"fail": tune.grid_search([False, True, False, False])... | 42 | 113 | 138,092 | |
Python | block_list.py | b5b4460932505912d88d65134931e0da170fb467 | 138 | 33,465 | 11 | python/ray/data/impl/block_list.py | ray | https://github.com/ray-project/ray.git | 0 | size_bytes | 4 | {
"docstring": "Returns the total size in bytes of the blocks, or -1 if not known.",
"language": "en",
"n_whitespaces": 13,
"n_words": 14,
"vocab_size": 13
} | 7 | 12 | Support creating a DatasetPipeline windowed by bytes (#22577) | 84 | 24 | def size_bytes(self) -> int:
size = 0
has_size = False
for m in self.get_metadata():
if m.size_bytes is not None:
has_size = True
size += m.size_bytes
if not has_size:
return -1
else:
return size
| 33 | 50 | 145,482 | |
Python | colors.py | 0abe0ce2f2748d1d0383154d045da3609a4b871b | 66 | 23,145 | 11 | lib/matplotlib/colors.py | matplotlib | https://github.com/matplotlib/matplotlib.git | 0 | unregister | 2 | {
"docstring": "\n Remove a sequence from the registry.\n\n You cannot remove built-in color sequences.\n\n If the name is not registered, returns with no error.\n ",
"language": "en",
"n_whitespaces": 51,
"n_words": 22,
"vocab_size": 21
} | 8 | 5 | Add a registry for color sequences
Color sequences are simply lists of colors, that we store by name in
a registry. The registry is modelled similar to the ColormapRegistry
to 1) support immutable builtin color sequences and 2) to return copies
so that one cannot mess with the global definition of the color sequence
t... | 65 | 20 | def unregister(self, name):
if name in self._BUILTIN_COLOR_SEQUENCES:
raise ValueError(
f"Cannot unregister builtin color sequence {name!r}")
self._color_sequences.pop(name, None)
_color_sequences = ColorSequenceRegistry()
| 20 | 31 | 108,335 | |
Python | _bootstrap_external.py | 8198943edd73a363c266633e1aa5b2a9e9c9f526 | 366 | 55,140 | 15 | python3.10.4/Lib/importlib/_bootstrap_external.py | XX-Net | https://github.com/XX-net/XX-Net.git | 0 | source_from_cache | 10 | {
"docstring": "Given the path to a .pyc file, return the path to its .py file.\n\n    The .pyc file does not need to exist; this simply returns the path to\n    the .py file calculated to correspond to the .pyc file.  If path does\n    not conform to PEP 3147/488 format, ValueError will be raised. If\n    sys.imple... | 33 | 30 | add python 3.10.4 for windows | 378 | 79 | def source_from_cache(path):
if sys.implementation.cache_tag is None:
raise NotImplementedError('sys.implementation.cache_tag is None')
path = _os.fspath(path)
head, pycache_filename = _path_split(path)
found_in_pycache_prefix = False
if sys.pycache_prefix is not None:
stripped_... | 121 | 212 | 218,114 | |
Python | test_dag_serialization.py | 49e336ae0302b386a2f47269a6d13988382d975f | 256 | 9,201 | 12 | tests/serialization/test_dag_serialization.py | airflow | https://github.com/apache/airflow.git | 0 | test_deps_sorted | 1 | {
"docstring": "\n        Tests serialize_operator, make sure the deps are in order\n        ",
"language": "en",
"n_whitespaces": 24,
"n_words": 9,
"vocab_size": 9
} | 25 | 21 | Replace usage of `DummyOperator` with `EmptyOperator` (#22974)
* Replace usage of `DummyOperator` with `EmptyOperator` | 186 | 40 | def test_deps_sorted(self):
from airflow.operators.empty import EmptyOperator
from airflow.sensors.external_task import ExternalTaskSensor
execution_date = datetime(2020, 1, 1)
with DAG(dag_id="test_deps_sorted", start_date=execution_date) as dag:
task1 = ExternalTa... | 49 | 109 | 47,665 | |
Python | workflow_storage.py | f67871c1f7e79adc727b2a15311d9332832d2e8a | 18 | 31,892 | 8 | python/ray/workflow/workflow_storage.py | ray | https://github.com/ray-project/ray.git | 0 | load_workflow_status | 1 | {
"docstring": "Load workflow status. If we find the previous status updating failed,\n fix it with redo-log transaction recovery.",
"language": "en",
"n_whitespaces": 23,
"n_words": 17,
"vocab_size": 17
} | 4 | 2 | [workflow] Fast workflow indexing (#24767)
* workflow indexing
* simplify workflow storage API
* Only fix workflow status when updating the status.
* support status filter | 30 | 4 | def load_workflow_status(self):
return self._status_storage.load_workflow_status(self._workflow_id)
| 4 | 17 | 140,203 | |
Python | PythonFlavors.py | c723f658e8c11ec92d6ef90c2f42527c67d3f318 | 48 | 42,789 | 9 | nuitka/PythonFlavors.py | Nuitka | https://github.com/Nuitka/Nuitka.git | 0 | isCPythonOfficialPackage | 3 | {
"docstring": "Official CPython download, kind of hard to detect since self-compiled doesn't change much.",
"language": "en",
"n_whitespaces": 12,
"n_words": 13,
"vocab_size": 13
} | 5 | 6 | Added CPython Official flavor, so far only detected on macOS | 44 | 18 | def isCPythonOfficialPackage():
# For macOS however, it's very knowable.
if isMacOS() and sys.executable.startswith(
"/Library/Frameworks/Python.framework/Versions/"
):
return True
return False
| 19 | 23 | 178,678 | |
Python | post_process.py | bc59434031930199dcdc056943c2ba4a17bbd5c8 | 126 | 18,118 | 15 | src/sentry/tasks/post_process.py | sentry | https://github.com/getsentry/sentry.git | 0 | update_existing_attachments | 2 | {
"docstring": "\n Attaches the group_id to all event attachments that were either:\n\n 1) ingested prior to the event via the standalone attachment endpoint.\n 2) part of a different group before reprocessing started.\n ",
"language": "en",
"n_whitespaces": 43,
"n_words": 30,
"vocab_size": 26
} | 18 | 10 | ref(perf-issues): Modularize post_process_group (ISP-11) (#39594)
Fully modularizes `post_process_group` as final step before adding
multiple event types to it. | 116 | 33 | def update_existing_attachments(job):
# Patch attachments that were ingested on the standalone path.
with sentry_sdk.start_span(op="tasks.post_process_group.update_existing_attachments"):
try:
from sentry.models import EventAttachment
event = job["event"]
Event... | 33 | 66 | 86,527 | |
Python | finders.py | 9c19aff7c7561e3a82978a272ecdaad40dda5c00 | 46 | 50,701 | 8 | django/contrib/staticfiles/finders.py | django | https://github.com/django/django.git | 0 | list | 1 | {
"docstring": "\n Given an optional list of paths to ignore, return a two item iterable\n consisting of the relative path and storage instance.\n ",
"language": "en",
"n_whitespaces": 43,
"n_words": 21,
"vocab_size": 20
} | 4 | 4 | Refs #33476 -- Reformatted code with Black. | 25 | 14 | def list(self, ignore_patterns):
raise NotImplementedError(
"subclasses of BaseFinder must provide a list() method"
)
| 14 | 13 | 204,338 | |
Python | test_async_job.py | a3aae8017a0a40ff2006e2567f71dccb04c997a5 | 65 | 563 | 10 | airbyte-integrations/connectors/source-facebook-marketing/unit_tests/test_async_job.py | airbyte | https://github.com/airbytehq/airbyte.git | 0 | test_less_jobs | 2 | {
"docstring": "Should update all jobs when the number of jobs is less than the max batch size",
"language": "en",
"n_whitespaces": 13,
"n_words": 14,
"vocab_size": 12
} | 16 | 6 | 🎉 🎉 Source FB Marketing: performance and reliability fixes (#9805)
* Facebook Marketing performance improvement
* add comments and little refactoring
* fix integration tests with the new config
* improve job status handling, limit concurrency to 10
* fix campaign jobs, refactor manager
* big refactori... | 93 | 20 | def test_less_jobs(self, api, started_job, batch):
jobs = [started_job for _ in range(49)]
update_in_batch(api=api, jobs=jobs)
assert started_job.update_job.call_count == 49
assert len(api.new_batch.return_value) == 49
batch.execute.assert_called_once()
| 23 | 60 | 3,800 | |
Python | config_test.py | dd9084523e365e637443ea351eaaaa25f52d8412 | 257 | 26,359 | 13 | lib/tests/streamlit/config_test.py | streamlit | https://github.com/streamlit/streamlit.git | 0 | test_load_global_local_flag_config | 1 | {
"docstring": "Test that CLI flags have higher priority than both\n ~/.streamlit/config.toml and $CWD/.streamlit/config.toml at parse time.\n \n [theme]\n base = \"dark\"\n font = \"sans serif\"\n textColor = \"#FFFFFF\"\n \n [theme]\n base = \"light\"\n... | 26 | 31 | Report sharing removal (#4260)
The report sharing feature is a substantial but completely unused portion of the code in Streamlit's underlying machinery. The feature was created early on, used by just a few groups, and has not been used by anyone for a while, as indicated by no activity in the associated S3 buckets. T... | 292 | 58 | def test_load_global_local_flag_config(self):
global_config =
local_config =
global_config_path = "/mock/home/folder/.streamlit/config.toml"
local_config_path = os.path.join(os.getcwd(), ".streamlit/config.toml")
global_open = mock_open(read_data=global_config)
... | 70 | 163 | 118,684 | |
Python | test_pickle.py | 864729813a0203af8bb0d30b6c883588ae2c96f8 | 114 | 39,808 | 12 | pandas/tests/io/test_pickle.py | pandas | https://github.com/pandas-dev/pandas.git | 0 | test_pickle_binary_object_compression | 2 | {
"docstring": "\n Read/write from binary file-objects w/wo compression.\n\n GH 26237, GH 29054, and GH 29570\n ",
"language": "en",
"n_whitespaces": 23,
"n_words": 13,
"vocab_size": 11
} | 20 | 12 | ENH: add support for reading .tar archives (#44787)
* Add reproduction test for .tar.gz archives
co-authored-by: Margarete Dippel <[email protected]>
* add support for .tar archives
python's `tarfile` supports gzip, xz and bz2 encoding,
so we don't need to make any special cases for that.
... | 188 | 44 | def test_pickle_binary_object_compression(compression):
df = tm.makeDataFrame()
# reference for compression
with tm.ensure_clean() as path:
df.to_pickle(path, compression=compression)
reference = Path(path).read_bytes()
# write
buffer = io.BytesIO()
df.to_pickle(buffer, co... | 57 | 109 | 166,376 | |
Python | views.py | de3fcba9e95818e9634ab7de6bfcb1f4221f2775 | 265 | 15,592 | 15 | wagtail/contrib/forms/views.py | wagtail | https://github.com/wagtail/wagtail.git | 0 | get_filtering | 5 | {
"docstring": " Return filtering as a dict for submissions queryset ",
"language": "en",
"n_whitespaces": 9,
"n_words": 8,
"vocab_size": 8
} | 15 | 15 | Fix warnings from flake8-comprehensions. | 174 | 42 | def get_filtering(self):
self.select_date_form = SelectDateForm(self.request.GET)
result = {}
if self.select_date_form.is_valid():
date_from = self.select_date_form.cleaned_data.get('date_from')
date_to = self.select_date_form.cleaned_data.get('date_to')
... | 58 | 100 | 70,980 | |
Python | get_item_details.py | 494bd9ef78313436f0424b918f200dab8fc7c20b | 25 | 14,622 | 15 | erpnext/stock/get_item_details.py | erpnext | https://github.com/frappe/erpnext.git | 0 | get_so_reservation_for_item | 9 | {
"docstring": "select sales_order from `tabSales Invoice Item` where\n\t\tparent=%s and item_code=%s",
"language": "en",
"n_whitespaces": 8,
"n_words": 10,
"vocab_size": 10
} | 9 | 18 | style: format code with black | 253 | 26 | def get_so_reservation_for_item(args):
reserved_so = None
if args.get("against_sales_order"):
if get_reserved_qty_for_so(args.get("against_sales_order"), args.get("item_code")):
reserved_so = args.get("against_sales_order")
elif args.get("against_sales_invoice"):
sales_order = frappe.db.sql(
,
(args.get... | 42 | 146 | 67,799 | |
Python | test_macaroons.py | fe1daad67237c2154a3d8d8cdf6c603f0d33682e | 240 | 72,365 | 11 | tests/util/test_macaroons.py | synapse | https://github.com/matrix-org/synapse.git | 0 | test_short_term_login_token | 1 | {
"docstring": "Test the generation and verification of short-term login tokens",
"language": "en",
"n_whitespaces": 8,
"n_words": 9,
"vocab_size": 9
} | 17 | 18 | Move the "email unsubscribe" resource, refactor the macaroon generator & simplify the access token verification logic. (#12986)
This simplifies the access token verification logic by removing the `rights`
parameter which was only ever used for the unsubscribe link in email
notifications. The latter has been moved un... | 233 | 39 | def test_short_term_login_token(self):
token = self.macaroon_generator.generate_short_term_login_token(
user_id="@user:tesths",
auth_provider_id="oidc",
auth_provider_session_id="sid",
duration_in_ms=2 * 60 * 1000,
)
info = self.macaroon_... | 55 | 135 | 248,585 | |
Python | vader.py | 74bb3c28ce9f2cd2be4cd9176747d59a0d67285d | 274 | 7,656 | 15 | nltk/sentiment/vader.py | nltk | https://github.com/nltk/nltk.git | 0 | polarity_scores | 6 | {
"docstring": "\n        Return a float for sentiment strength based on the input text.\n        Positive values are positive valence, negative values are negative\n        valence.\n\n        :note: Hashtags are not taken into consideration (e.g. #BAD is neutral). If you\n        are interested in processing the... | 21 | 19 | Add a note stating that a hashtag is unsupported in VADER | 218 | 53 | def polarity_scores(self, text):
# text, words_and_emoticons, is_cap_diff = self.preprocess(text)
sentitext = SentiText(
text, self.constants.PUNC_LIST, self.constants.REGEX_REMOVE_PUNCTUATION
)
sentiments = []
words_and_emoticons = sentitext.words_and_emotic... | 70 | 138 | 42,601 | |
Python | test_axes3d.py | 7a1df7830f7685a99291d90c5e79bfc5e7876f31 | 150 | 24,166 | 10 | lib/mpl_toolkits/mplot3d/tests/test_axes3d.py | matplotlib | https://github.com/matplotlib/matplotlib.git | 0 | test_mutating_input_arrays_y_and_z | 1 | {
"docstring": "\n Test to see if the `z` axis does not get mutated\n after a call to `Axes3D.plot`\n\n test cases came from GH#8990\n ",
"language": "en",
"n_whitespaces": 34,
"n_words": 21,
"vocab_size": 20
} | 14 | 19 | Test that plot results aren't affected by mutating input arrays | 277 | 46 | def test_mutating_input_arrays_y_and_z(fig_test, fig_ref):
ax1 = fig_test.add_subplot(111, projection='3d')
x = [1, 2, 3]
y = [0.0, 0.0, 0.0]
z = [0.0, 0.0, 0.0]
ax1.plot(x, y, z, 'o-')
ax1.set_ylim([0, 4])
ax1.set_zlim([0, 4])
fig_test.draw_without_rendering()
# mutate y,z to... | 87 | 208 | 110,450 | |
Python | views.py | bb26f96665567325a7fbb810249820e7dac0322a | 47 | 8,568 | 9 | airflow/www/views.py | airflow | https://github.com/apache/airflow.git | 0 | dag_edges | 4 | {
"docstring": "\n Create the list of edges needed to construct the Graph view.\n\n A special case is made if a TaskGroup is immediately upstream/downstream of another\n TaskGroup or task. Two dummy nodes named upstream_join_id and downstream_join_id are\n created for the TaskGroup. Instead of drawing an ... | 8 | 18 | Make Grid and and Graph view work with task mapping (#21740)
* Expand mapped tasks in the Scheduler
Technically this is done inside
DagRun.task_instance_scheduling_decisions, but the only place that is
currently called is the Scheduler
The way we are getting `upstream_ti` to pass to expand_mapped_task is
all ... | 48 | 22 | def dag_edges(dag):
# Edges to add between TaskGroup
edges_to_add = set()
# Edges to remove between individual tasks that are replaced by edges_to_add.
edges_to_skip = set()
task_group_map = dag.task_group.get_task_group_dict()
| 29 | 115 | 45,441 | |
Python | testing_utils.py | 57e6464ac9a31156f1c93e59107323e6ec01309e | 13 | 6,797 | 10 | src/transformers/testing_utils.py | transformers | https://github.com/huggingface/transformers.git | 0 | require_tokenizers | 1 | {
"docstring": "\n Decorator marking a test that requires 🤗 Tokenizers. These tests are skipped when 🤗 Tokenizers isn't installed.\n ",
"language": "en",
"n_whitespaces": 24,
"n_words": 17,
"vocab_size": 16
} | 5 | 2 | Update all require decorators to use skipUnless when possible (#16999) | 37 | 7 | def require_tokenizers(test_case):
return unittest.skipUnless(is_tokenizers_available(), "test requires tokenizers")(test_case)
| 7 | 20 | 37,492 | |
Python | hitachi_dhw.py | 1c0f9cf941f77d6e3d299f98d5174f0a2953f236 | 94 | 90,732 | 9 | homeassistant/components/overkiz/water_heater_entities/hitachi_dhw.py | core | https://github.com/home-assistant/core.git | 0 | current_operation | 5 | {
"docstring": "Return current operation, i.e. eco, electric, performance, ...",
"language": "en",
"n_whitespaces": 7,
"n_words": 8,
"vocab_size": 8
} | 15 | 9 | Add Overkiz Hitachi DHW (#81536)
* Port ha-tahome hitachi dhw
* Use int for setting temperature
* Use value as float when possible
* Use device state for current operation
* Update homeassistant/components/overkiz/water_heater_entities/hitachi_dhw.py
Co-authored-by: Quentame <[email protected]>
... | 102 | 23 | def current_operation(self) -> str | None:
modbus_control = self.device.states[OverkizState.MODBUS_CONTROL_DHW]
if modbus_control and modbus_control.value_as_str == OverkizCommandParam.STOP:
return STATE_OFF
current_mode = self.device.states[OverkizState.MODBUS_DHW_MODE]
... | 30 | 65 | 291,628 | |
Python | wordnet.py | 8ffd0d8190552d45f8b92e18da3fc41639e5185d | 210 | 7,546 | 14 | nltk/corpus/reader/wordnet.py | nltk | https://github.com/nltk/nltk.git | 0 | add_provs | 4 | {
"docstring": "Add languages from Multilingual Wordnet to the provenance dictionary",
"language": "en",
"n_whitespaces": 8,
"n_words": 9,
"vocab_size": 9
} | 16 | 10 | Initialize empty provenance for default English | 150 | 41 | def add_provs(self, reader):
fileids = reader.fileids()
for fileid in fileids:
prov, langfile = os.path.split(fileid)
file_name, file_extension = os.path.splitext(langfile)
if file_extension == ".tab":
lang = file_name.split("-")[-1]
... | 54 | 84 | 42,453 | |
Python | unix.py | f3166e673fe8d40277b804d35d77dcdb760fc3b3 | 73 | 3,281 | 11 | pipenv/patched/notpip/_vendor/platformdirs/unix.py | pipenv | https://github.com/pypa/pipenv.git | 0 | site_data_dir | 2 | {
"docstring": "\n :return: data directories shared by users (if `multipath <platformdirs.api.PlatformDirsABC.multipath>` is\n enabled and ``XDG_DATA_DIR`` is set and a multi path the response is also a multi path separated by the OS\n path separator), e.g. ``/usr/local/share/$appname/$version`... | 10 | 10 | check point progress on only bringing in pip==22.0.4 (#4966)
* vendor in pip==22.0.4
* updating vendor packaging version
* update pipdeptree to fix pipenv graph with new version of pip.
* Vendoring of pip-shims 0.7.0
* Vendoring of requirementslib 1.6.3
* Update pip index safety restrictions patch for p... | 78 | 24 | def site_data_dir(self) -> str:
# XDG default for $XDG_DATA_DIRS; only first, if multipath is False
path = os.environ.get("XDG_DATA_DIRS", "")
if not path.strip():
path = f"/usr/local/share{os.pathsep}/usr/share"
return self._with_multi_path(path)
| 27 | 39 | 20,229 | |
Python | stochastic_process_types.py | 7fe8e027ae1d7f683243c0229b961671a6cbb4c5 | 69 | 48,618 | 8 | sympy/stats/stochastic_process_types.py | sympy | https://github.com/sympy/sympy.git | 0 | absorbing_probabilities | 3 | {
"docstring": "\n Computes the absorbing probabilities, i.e.\n the ij-th entry of the matrix denotes the\n probability of Markov chain being absorbed\n in state j starting from state i.\n ",
"language": "en",
"n_whitespaces": 62,
"n_words": 26,
"vocab_size": 21
} | 7 | 6 | Improved some documentation in the stats module | 67 | 17 | def absorbing_probabilities(self):
_, _, R, _ = self.decompose()
N = self.fundamental_matrix()
if R is None or N is None:
return None
return N*R
| 23 | 41 | 197,540 | |
Python | util.py | f3166e673fe8d40277b804d35d77dcdb760fc3b3 | 43 | 3,407 | 11 | pipenv/patched/notpip/_vendor/pygments/util.py | pipenv | https://github.com/pypa/pipenv.git | 0 | doctype_matches | 2 | {
"docstring": "Check if the doctype matches a regular expression (if present).\n\n Note that this method only checks the first part of a DOCTYPE.\n eg: 'html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\"'\n ",
"language": "en",
"n_whitespaces": 38,
"n_words": 29,
"vocab_size": 27
} | 13 | 6 | check point progress on only bringing in pip==22.0.4 (#4966)
* vendor in pip==22.0.4
* updating vendor packaging version
* update pipdeptree to fix pipenv graph with new version of pip.
* Vendoring of pip-shims 0.7.0
* Vendoring of requirementslib 1.6.3
* Update pip index safety restrictions patch for p... | 87 | 17 | def doctype_matches(text, regex):
m = doctype_lookup_re.search(text)
if m is None:
return False
doctype = m.group(1)
return re.compile(regex, re.I).match(doctype.strip()) is not None
| 21 | 54 | 20,520 | |
Python | test_data_export.py | 096b5511e244eecd8799b2a0324655207ce8985e | 166 | 19,764 | 12 | tests/sentry/data_export/endpoints/test_data_export.py | sentry | https://github.com/getsentry/sentry.git | 0 | test_converts_stats_period_start_end | 1 | {
"docstring": "\n Ensures that statsPeriodStart and statsPeriodEnd is converted to start/end.\n ",
"language": "en",
"n_whitespaces": 24,
"n_words": 9,
"vocab_size": 9
} | 18 | 15 | ref(tests): Remove `get_valid_response()` (#34822) | 205 | 32 | def test_converts_stats_period_start_end(self):
payload = self.make_payload("discover", {"statsPeriodStart": "1w", "statsPeriodEnd": "5d"})
with self.feature("organizations:discover-query"):
response = self.get_success_response(self.org.slug, status_code=201, **payload)
data... | 49 | 114 | 100,170 | |
Python | admin.py | 9c19aff7c7561e3a82978a272ecdaad40dda5c00 | 155 | 50,467 | 11 | django/contrib/auth/admin.py | django | https://github.com/django/django.git | 0 | response_add | 3 | {
"docstring": "\n Determine the HttpResponse for the add_view stage. It mostly defers to\n its superclass implementation but is customized because the User model\n has a slightly different workflow.\n ",
"language": "en",
"n_whitespaces": 55,
"n_words": 26,
"vocab_size": 24
} | 9 | 5 | Refs #33476 -- Reformatted code with Black. | 102 | 52 | def response_add(self, request, obj, post_url_continue=None):
# We should allow further modification of the user just added i.e. the
# 'Save' button should behave like the 'Save and continue editing'
# button except in two scenarios:
# * The user has pressed the 'Save and add an... | 77 | 61 | 203,593 | |
Python | test_temperature_format.py | b0ed42a5a58976ebe82b5bbbb60c499648a1718b | 18 | 95,665 | 9 | tests/components/daikin/test_temperature_format.py | core | https://github.com/home-assistant/core.git | 0 | test_decimal_conversion_more_digits | 1 | {
"docstring": "Check at most 1 decimal is kept when target temp is a decimal with more than 1 decimal.",
"language": "en",
"n_whitespaces": 17,
"n_words": 18,
"vocab_size": 15
} | 3 | 3 | Fix #69952: Daikin AC Temperature jumps after being set (#70326) | 32 | 8 | def test_decimal_conversion_more_digits():
formatted = format_target_temperature("16.09")
assert formatted == "16.1"
| 9 | 15 | 296,690 | |
Python | workflows.py | 60ba39ffb5ec6d760efa6e2ecbff7ede53b12464 | 75 | 15,531 | 13 | wagtail/admin/views/workflows.py | wagtail | https://github.com/wagtail/wagtail.git | 0 | get_task_chosen_response | 1 | {
"docstring": "\n helper function: given a task, return the response indicating that it has been chosen\n ",
"language": "en",
"n_whitespaces": 21,
"n_words": 14,
"vocab_size": 14
} | 10 | 10 | replace get_task_result_data helper with more useful one get_task_chosen_response | 103 | 23 | def get_task_chosen_response(request, task):
result_data = {
'id': task.id,
'name': task.name,
'edit_url': reverse('wagtailadmin_workflows:edit_task', args=[task.id]),
}
return render_modal_workflow(
request, None, None,
None, json_data={'step': 'task_chosen', 'r... | 25 | 62 | 70,608 | |
Python | matching.py | c3e1e7f4c6a4edb968494cd4775574ad26f2a96b | 163 | 41,940 | 11 | networkx/algorithms/matching.py | @not_implemented_for("multigraph")
@not_implemented_for("directed") | networkx | https://github.com/networkx/networkx.git | 1 | min_weight_matching | 5 | {
"docstring": "Computing a minimum-weight maximal matching of G.\n\n Use the maximum-weight algorithm with edge weights subtracted\n from the maximum weight of all edges.\n\n A matching is a subset of edges in which no node occurs more than once.\n The weight of a matching is the sum of the weights of it... | 22 | 15 | Fix min_weight_matching to convert edge weights without reciprocal (#5394)
* Add test and then fix code and docs
* Correct and improve docs. Change 1e-6 to 1 to maintain integers.
Include argument in docstring for why adding the 1 doesn't impact the min | 231 | 65 | def min_weight_matching(G, maxcardinality=None, weight="weight"):
if maxcardinality not in (True, None):
raise nx.NetworkXError(
"The argument maxcardinality does not make sense "
"in the context of minimum weight matchings."
"It is deprecated and will be removed in ... | 84 | 137 | 176,513 |
Python | proxmox.py | a5679caf65c7c79cd72841b6e5793b9b693744c9 | 254 | 54,362 | 15 | salt/cloud/clouds/proxmox.py | salt | https://github.com/saltstack/salt.git | 0 | _find_agent_ip | 8 | {
"docstring": "\n    If the VM is started, return the IP addresses reported by the qemu agent on the VM.\n    ",
"language": "en",
"n_whitespaces": 26,
"n_words": 19,
"vocab_size": 17
} | 19 | 19 | Add support for get IP-address from agent | 231 | 78 | def _find_agent_ip(vm_, vmid):
# This functionality is only available on qemu
if not vm_.get("technology") == "qemu":
log.warning("Find agent IP is only available under `qemu`")
return
# Create an empty list of IP-addresses:
ips = []
endpoint = "nodes/{}/qemu/{}/agent/network... | 106 | 128 | 216,056 | |
Python | test_get_name.py | 3bb43882e727b1d36abe2e501759c9c5e9048ecf | 43 | 54,131 | 12 | tests/pytests/unit/utils/win_dacl/test_get_name.py | salt | https://github.com/saltstack/salt.git | 0 | test_get_name_error | 1 | {
"docstring": "\n    Test get_name with an unmapped SID, should throw a CommandExecutionError\n    ",
"language": "en",
"n_whitespaces": 18,
"n_words": 11,
"vocab_size": 11
} | 16 | 6 | Add tests, migrate some tests to pytest | 87 | 20 | def test_get_name_error():
test_sid = "S-1-2-3-4"
sid_obj = win32security.ConvertStringSidToSid(test_sid)
with pytest.raises(salt.exceptions.CommandExecutionError) as exc:
salt.utils.win_dacl.get_name(sid_obj)
assert "No mapping between account names" in exc.value.message
| 21 | 48 | 215,737 | |
Python | introspection.py | 9c19aff7c7561e3a82978a272ecdaad40dda5c00 | 93 | 51,029 | 12 | django/db/backends/sqlite3/introspection.py | django | https://github.com/django/django.git | 0 | get_relations | 2 | {
"docstring": "\n Return a dictionary of {column_name: (ref_column_name, ref_table_name)}\n representing all foreign keys in the given table.\n ",
"language": "en",
"n_whitespaces": 37,
"n_words": 15,
"vocab_size": 15
} | 13 | 8 | Refs #33476 -- Reformatted code with Black. | 85 | 24 | def get_relations(self, cursor, table_name):
cursor.execute(
"PRAGMA foreign_key_list(%s)" % self.connection.ops.quote_name(table_name)
)
return {
column_name: (ref_column_name, ref_table_name)
for _, _, ref_table_name, column_name, ref_column_name, *... | 25 | 56 | 205,195 | |
Python | background_updates.py | 26211fec24d8d0a967de33147e148166359ec8cb | 158 | 71,674 | 12 | synapse/storage/background_updates.py | synapse | https://github.com/matrix-org/synapse.git | 0 | average_items_per_ms | 3 | {
"docstring": "An estimate of how long it takes to do a single update.\n Returns:\n A duration in ms as a float\n ",
"language": "en",
"n_whitespaces": 45,
"n_words": 20,
"vocab_size": 19
} | 7 | 11 | Fix a bug in background updates wherein background updates are never run using the default batch size (#12157) | 78 | 47 | def average_items_per_ms(self) -> Optional[float]:
# We want to return None if this is the first background update item
if self.total_item_count == 0:
return None
# Avoid dividing by zero
elif self.avg_duration_ms == 0:
return 0
else:
... | 61 | 45 | 247,442 | |
Python | media_player.py | 26251895295d74fcd2c73e37804c23675c433247 | 37 | 106,437 | 7 | homeassistant/components/forked_daapd/media_player.py | async def _pause_and_wait_for_callback(self):
"""Send pause and wait for the pause callback to be received."""
self._pause_requested = True
await self.async_media_pause()
try: | core | https://github.com/home-assistant/core.git | 1 | _pause_and_wait_for_callback | 2 | {
"docstring": "Send pause and wait for the pause callback to be received.",
"language": "en",
"n_whitespaces": 10,
"n_words": 11,
"vocab_size": 10
} | 4 | 9 | Use async_timeout in forked_daapd (#78451) | 34 | 9 | async def _pause_and_wait_for_callback(self):
self._pause_requested = True
await self.async_media_pause()
try: | 9 | 53 | 307,669 |
Python | PySimpleGUI.py | 07bb93d47f01468660a01f42150e87e5cb08d546 | 10,839 | 53,473 | 16 | PySimpleGUI.py | PySimpleGUI | https://github.com/PySimpleGUI/PySimpleGUI.git | 0 | set_options | 1 | {
"docstring": "\n :param icon: Can be either a filename or Base64 value. For Windows if filename, it MUST be ICO format. For Linux, must NOT be ICO. Most portable is to use a Base64 of a PNG file. This works universally across all OS's\n :type icon: bytes ... | 131 | 14 | Addition of tooltip_offset parm to set_options call (major hack to get around 8.6.12 problem). Backed out the experiments to try and fix new problem with Ubuntu | 19,192 | 1,112 | def set_options(icon=None, button_color=None, element_size=(None, None), button_element_size=(None, None),
margins=(None, None),
element_padding=(None, None), auto_size_text=None, auto_size_buttons=None, font=None, border_width=None,
slider_border_width=None, slider_relie... | 4,824 | 255 | 212,865 | |
Python | test_analytics.py | 46500beefcccd8106718a8172a5078bbe5579765 | 85 | 95,933 | 16 | tests/components/analytics/test_analytics.py | core | https://github.com/home-assistant/core.git | 0 | test_load_with_supervisor_without_diagnostics | 1 | {
"docstring": "Test loading with a supervisor that has not diagnostics enabled.",
"language": "en",
"n_whitespaces": 9,
"n_words": 10,
"vocab_size": 10
} | 12 | 13 | Enable strict typing of analytics (#83119) | 132 | 22 | async def test_load_with_supervisor_without_diagnostics(hass):
analytics = Analytics(hass)
analytics._data.preferences[ATTR_DIAGNOSTICS] = True
assert analytics.preferences[ATTR_DIAGNOSTICS]
with patch(
"homeassistant.components.hassio.get_supervisor_info",
side_effect=Mock(return... | 26 | 78 | 296,961 | |
Python | test_message_builder.py | db35e231ceababe8c9f5ca7b5d2ca685f07c7d5b | 581 | 18,974 | 11 | tests/sentry/integrations/msteams/test_message_builder.py | sentry | https://github.com/getsentry/sentry.git | 0 | test_issue_message_builder | 3 | {
"docstring": "\\{\\{ # {{\n DATE\\( # DATE(\n [0-9T+:\\-]+,\\ SHORT # 2022-07-14T19:30:34, SHORT\n \\) # )\n \\}\\} # }}\n \\ # whitespace\n ... | 41 | 60 | test(msteams): Add tests for building group card (#36834)
Add tests for build_group_card which builds issues cards. Does NOT test all visual aspects of the card. Only ensures that certain important elements are present and the basic structure of the card is correct. | 694 | 110 | def test_issue_message_builder(self):
self.event1.data["metadata"].update({"value": "some error"})
self.group1.data["metadata"].update({"value": "some error"})
self.event1.data["type"] = self.group1.data["type"] = "error"
issue_card = build_group_card(
group=self.group1, eve... | 176 | 402 | 93,204 | |
Python | cached.py | 9c19aff7c7561e3a82978a272ecdaad40dda5c00 | 127 | 51,474 | 12 | django/template/loaders/cached.py | django | https://github.com/django/django.git | 0 | cache_key | 7 | {
"docstring": "\n Generate a cache key for the template name and skip.\n\n If skip is provided, only origins that match template_name are included\n in the cache key. This ensures each template is only parsed and cached\n once if contained in different extend chains like:\n\n x... | 12 | 9 | Refs #33476 -- Reformatted code with Black. | 106 | 28 | def cache_key(self, template_name, skip=None):
skip_prefix = ""
if skip:
matching = [
origin.name for origin in skip if origin.template_name == template_name
]
if matching:
skip_prefix = self.generate_hash(matching)
r... | 36 | 66 | 206,296 | |
Python | PythonFlavors.py | 77e7c06c0f9c5c0735b5a65c72abcd243d8e3640 | 47 | 42,801 | 11 | nuitka/PythonFlavors.py | Nuitka | https://github.com/Nuitka/Nuitka.git | 0 | isNuitkaPython | 2 | {
"docstring": "Is this our own fork of CPython named Nuitka-Python.",
"language": "en",
"n_whitespaces": 8,
"n_words": 9,
"vocab_size": 9
} | 7 | 5 | Minor cleanups | 59 | 19 | def isNuitkaPython():
# spell-checker: ignore nuitkapython
if python_version >= 0x300:
return sys.implementation.name == "nuitkapython"
else:
return sys.subversion[0] == "nuitkapython"
_is_anaconda = None
| 22 | 29 | 178,712 | |
Python | req_command.py | f638f5d0e6c8ebed0e69a6584bc7f003ec646580 | 158 | 12,206 | 12 | .venv/lib/python3.8/site-packages/pip/_internal/cli/req_command.py | transferlearning | https://github.com/jindongwang/transferlearning.git | 0 | handle_pip_version_check | 3 | {
"docstring": "\n Do the pip version check if not disabled.\n\n This overrides the default behavior of not doing the check.\n ",
"language": "en",
"n_whitespaces": 40,
"n_words": 18,
"vocab_size": 15
} | 17 | 9 | upd; format | 117 | 50 | def handle_pip_version_check(self, options):
# type: (Values) -> None
# Make sure the index_group options are present.
assert hasattr(options, "no_index")
if options.disable_pip_version_check or options.no_index:
return
# Otherwise, check if we're using the... | 55 | 57 | 60,555 | |
Python | admin_modify.py | 9c19aff7c7561e3a82978a272ecdaad40dda5c00 | 457 | 50,412 | 15 | django/contrib/admin/templatetags/admin_modify.py | @register.tag(name="submit_row") | django | https://github.com/django/django.git | 1 | submit_row | 22 | {
"docstring": "\n Display the row of buttons for delete and save.\n ",
"language": "en",
"n_whitespaces": 16,
"n_words": 9,
"vocab_size": 9
} | 24 | 49 | Refs #33476 -- Reformatted code with Black. | 380 | 64 | def submit_row(context):
add = context["add"]
change = context["change"]
is_popup = context["is_popup"]
save_as = context["save_as"]
show_save = context.get("show_save", True)
show_save_and_add_another = context.get("show_save_and_add_another", True)
show_save_and_continue = context.get... | 131 | 213 | 203,500 |
Python | test_ridge.py | f14af688b7e77ecb6df9dfee93ec39b6c0334b86 | 60 | 75,551 | 11 | sklearn/linear_model/tests/test_ridge.py | scikit-learn | https://github.com/scikit-learn/scikit-learn.git | 0 | test_ridgecv_normalize_deprecated | 1 | {
"docstring": "Check that the normalize deprecation warning mentions the rescaling of alphas\n\n Non-regression test for issue #22540\n ",
"language": "en",
"n_whitespaces": 22,
"n_words": 16,
"vocab_size": 15
} | 13 | 8 | FIX Make Ridge*CV warn about rescaling alphas with scaling (#22585) | 108 | 26 | def test_ridgecv_normalize_deprecated(Estimator):
X = np.array([[1, -1], [1, 1]])
y = np.array([0, 1])
estimator = Estimator(normalize=True)
with pytest.warns(
FutureWarning, match=r"Set parameter alphas to: original_alphas \* n_samples"
):
estimator.fit(X, y)
| 28 | 68 | 259,066 | |
Python | __init__.py | 10027b20904b678d8baecbc6e72c5bcc3f4f24b2 | 29 | 107,548 | 10 | homeassistant/components/nissan_leaf/__init__.py | core | https://github.com/home-assistant/core.git | 0 | async_start_charging | 1 | {
"docstring": "Request to start charging the car. Used by the button platform.",
"language": "en",
"n_whitespaces": 10,
"n_words": 11,
"vocab_size": 10
} | 7 | 4 | Add button to start leaf charge (#62948)
Co-authored-by: Bruce Duncan <[email protected]> | 47 | 8 | async def async_start_charging(self) -> None:
await self.hass.async_add_executor_job(self.leaf.start_charging)
self.schedule_update()
| 8 | 26 | 308,815 | |
Python | binary_sensor.py | 0c767bd0d37a41af37728b1d8b4eae8dceb7e188 | 31 | 105,270 | 9 | homeassistant/components/smappee/binary_sensor.py | core | https://github.com/home-assistant/core.git | 0 | async_update | 1 | {
"docstring": "Get the latest data from Smappee and update the state.",
"language": "en",
"n_whitespaces": 9,
"n_words": 10,
"vocab_size": 9
} | 6 | 4 | Improve entity type hints [s] (part 1/2) (#77881) | 45 | 10 | async def async_update(self) -> None:
await self._smappee_base.async_update()
self._state = self._service_location.is_present
| 10 | 25 | 306,486 | |
Python | test_models.py | 103729c0855aad2f45fcaa2cf680799236f3e201 | 137 | 77,999 | 11 | netbox/dcim/tests/test_models.py | netbox | https://github.com/netbox-community/netbox.git | 0 | test_mount_half_u_devices | 1 | {
"docstring": "\n Check that two 0.5U devices can be mounted in the same rack unit.\n ",
"language": "en",
"n_whitespaces": 28,
"n_words": 13,
"vocab_size": 13
} | 21 | 12 | Add test for 0.5U devices | 196 | 30 | def test_mount_half_u_devices(self):
rack = Rack.objects.first()
attrs = {
'device_type': DeviceType.objects.get(u_height=0.5),
'device_role': DeviceRole.objects.first(),
'site': Site.objects.first(),
'rack': rack,
'face': DeviceFaceCh... | 33 | 121 | 265,126 | |
Python | unittest.py | 922b771337f6d14a556fa761c783748f698e924b | 30 | 72,530 | 8 | tests/unittest.py | synapse | https://github.com/matrix-org/synapse.git | 0 | make_homeserver | 1 | {
"docstring": "\n Make and return a homeserver.\n\n Args:\n reactor: A Twisted Reactor, or something that pretends to be one.\n clock (synapse.util.Clock): The Clock, associated with the reactor.\n\n Returns:\n A homeserver suitable for testing.\n\n Functi... | 6 | 3 | Add missing type hints for tests.unittest. (#13397) | 32 | 8 | def make_homeserver(self, reactor, clock):
hs = self.setup_test_homeserver()
return hs
| 9 | 19 | 248,955 | |
Python | query.py | 9c19aff7c7561e3a82978a272ecdaad40dda5c00 | 177 | 51,188 | 13 | django/db/models/query.py | django | https://github.com/django/django.git | 0 | ordered | 7 | {
"docstring": "\n Return True if the QuerySet is ordered -- i.e. has an order_by()\n clause or a default ordering on the model (or is empty).\n ",
"language": "en",
"n_whitespaces": 45,
"n_words": 23,
"vocab_size": 21
} | 11 | 14 | Refs #33476 -- Reformatted code with Black. | 103 | 29 | def ordered(self):
if isinstance(self, EmptyQuerySet):
return True
if self.query.extra_order_by or self.query.order_by:
return True
elif (
self.query.default_ordering
and self.query.get_meta().ordering
and
# A defau... | 36 | 63 | 205,746 | |
Python | multiple_upload.py | d10f15e55806c6944827d801cd9c2d53f5da4186 | 53 | 15,887 | 11 | wagtail/admin/views/generic/multiple_upload.py | wagtail | https://github.com/wagtail/wagtail.git | 0 | get_invalid_response_data | 1 | {
"docstring": "\n Return the JSON response data for an invalid form submission\n ",
"language": "en",
"n_whitespaces": 25,
"n_words": 10,
"vocab_size": 10
} | 5 | 5 | Reformat with black | 54 | 10 | def get_invalid_response_data(self, form):
return {
"success": False,
"error_message": "\n".join(form.errors["file"]),
}
| 10 | 29 | 72,414 | |
Python | cables.py | 6ff2e55ce408f0f7f2fe99129048421c25ecafe6 | 43 | 77,911 | 10 | netbox/dcim/models/cables.py | netbox | https://github.com/netbox-community/netbox.git | 0 | path_objects | 2 | {
"docstring": "\n Cache and return the complete path as lists of objects, derived from their annotation within the path.\n ",
"language": "en",
"n_whitespaces": 32,
"n_words": 17,
"vocab_size": 16
} | 5 | 4 | Add origins, destinations properties on CablePath | 50 | 10 | def path_objects(self):
if not hasattr(self, '_path_objects'):
self._path_objects = self._get_path()
return self._path_objects
| 11 | 28 | 264,915 | |
Python | backend_wx.py | e1eca0aa8bf0b51009e012cd37d3e95f364d0ee9 | 448 | 22,898 | 13 | lib/matplotlib/backends/backend_wx.py | matplotlib | https://github.com/matplotlib/matplotlib.git | 0 | gui_repaint | 7 | {
"docstring": "\n Update the displayed image on the GUI canvas, using the supplied\n wx.PaintDC device context.\n\n The 'WXAgg' backend sets origin accordingly.\n ",
"language": "en",
"n_whitespaces": 49,
"n_words": 20,
"vocab_size": 18
} | 31 | 17 | Expire deprecations in backends | 350 | 121 | def gui_repaint(self, drawDC=None):
_log.debug("%s - gui_repaint()", type(self))
# The "if self" check avoids a "wrapped C/C++ object has been deleted"
# RuntimeError if doing things after window is closed.
if not (self and self.IsShownOnScreen()):
return
if ... | 155 | 175 | 107,757 | |
Python | utils.py | 540bba4544d9f31c126571cc1a45a6783b3b6a89 | 138 | 78,322 | 13 | netbox/utilities/utils.py | netbox | https://github.com/netbox-community/netbox.git | 0 | copy_safe_request | 4 | {
"docstring": "\n Copy selected attributes from a request object into a new fake request object. This is needed in places where\n thread safe pickling of the useful request data is needed.\n ",
"language": "en",
"n_whitespaces": 39,
"n_words": 29,
"vocab_size": 25
} | 16 | 16 | Closes #10920: Include request cookies when queuing a custom script | 158 | 43 | def copy_safe_request(request):
meta = {
k: request.META[k]
for k in HTTP_REQUEST_META_SAFE_COPY
if k in request.META and isinstance(request.META[k], str)
}
return NetBoxFakeRequest({
'META': meta,
'COOKIES': request.COOKIES,
'POST': request.POST,
... | 45 | 97 | 266,161 | |
Python | test_product_filtering_and_sorting_with_channels.py | 3f773c3890aead936949bd6923d2d7f669e1c68f | 2,732 | 4,954 | 18 | saleor/graphql/product/tests/test_product_filtering_and_sorting_with_channels.py | @pytest.mark.parametrize(
"sort_by",
[
{"field": "PUBLISHED", "direction": "ASC"},
{"field": "PRICE", "direction": "DESC"},
{"field": "MINIMAL_PRICE", "direction": "DESC"},
{"field": "PUBLICATION_DATE", "direction": "DESC"},
],
) | saleor | https://github.com/saleor/saleor.git | 1 | products_for_sorting_with_channels | 1 | {
"docstring": "\n query ($sortBy: ProductOrder, $filter: ProductFilterInput, $channel: String){\n products (\n first: 10, sortBy: $sortBy, filter: $filter, channel: $channel\n ) {\n edges {\n node {\n name\n slug\n ... | 45 | 196 | Add sorting by LAST_MODIFIED_AT field to GraphQL schema (#9245)
* Add sorting by LAST_MODIFIED_AT to new types
* Add LAST_MODIFIED_AT to sorting exported files
* Update schema, fix variant sorter
* Update changelog
* Rebase and update changelog
Co-authored-by: Marcin Gębala <5421321+maarcingebala@users.... | 1,552 | 105 | def products_for_sorting_with_channels(category, channel_USD, channel_PLN):
product_type = ProductType.objects.create(name="Apple", kind=ProductTypeKind.NORMAL)
products = Product.objects.bulk_create(
[
Product(
name="Product1",
slug="prod1",
c... | 263 | 991 | 26,250 |
Python | test_build_ext.py | 8198943edd73a363c266633e1aa5b2a9e9c9f526 | 704 | 56,858 | 14 | python3.10.4/Lib/distutils/tests/test_build_ext.py | XX-Net | https://github.com/XX-net/XX-Net.git | 0 | _try_compile_deployment_target | 8 | {
"docstring": "\\\n #include <AvailabilityMacros.h>\n\n int dummy;\n\n #if TARGET %s MAC_OS_X_VERSION_MIN_REQUIRED\n #else\n #error \"Unexpected target\"\n #endif\n\n ",
"language": "en",
"n_whitespaces": 115,
... | 47 | 55 | add python 3.10.4 for windows | 500 | 129 | def _try_compile_deployment_target(self, operator, target):
orig_environ = os.environ
os.environ = orig_environ.copy()
self.addCleanup(setattr, os, 'environ', orig_environ)
if target is None:
if os.environ.get('MACOSX_DEPLOYMENT_TARGET'):
del os.environ['MACO... | 196 | 288 | 223,085 | |
Python | test_locking.py | 10dbbddaf35607e4257f50dd960520a1268dd225 | 216 | 17,037 | 11 | wagtail/snippets/tests/test_locking.py | wagtail | https://github.com/wagtail/wagtail.git | 0 | test_edit_post_locked_by_self | 1 | {
"docstring": "A user can edit a snippet that is locked by themselves.",
"language": "en",
"n_whitespaces": 10,
"n_words": 11,
"vocab_size": 11
} | 17 | 14 | Add tests for locking snippets | 142 | 45 | def test_edit_post_locked_by_self(self):
# Lock the snippet
self.lock_snippet(self.user)
# Try to edit the snippet
response = self.client.post(
self.get_url("edit"),
{"text": "Edited while locked"},
follow=True,
)
self.refresh... | 63 | 77 | 80,233 | |
Python | test_advanced_4.py | 7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065 | 381 | 29,488 | 12 | python/ray/tests/test_advanced_4.py | ray | https://github.com/ray-project/ray.git | 0 | test_jemalloc_env_var_propagate | 1 | {
"docstring": "Test `propagate_jemalloc_env_var`\n If the shared library path is not specified,\n it should return an empty dict.\n \n When the shared library is specified\n \n When the malloc config is specified\n ",
"language": "en",
"n_whitespaces": 51,
"n_words": 28,
"vocab_size": 20... | 20 | 57 | [CI] Format Python code with Black (#21975)
See #21316 and #21311 for the motivation behind these changes. | 420 | 52 | def test_jemalloc_env_var_propagate():
gcs_ptype = ray.ray_constants.PROCESS_TYPE_GCS_SERVER
expected = {}
actual = ray._private.services.propagate_jemalloc_env_var(
jemalloc_path="", jemalloc_conf="", jemalloc_comps=[], process_type=gcs_ptype
)
assert actual == expected
actual... | 114 | 258 | 131,233 | |
Python | models.py | 1636508a6ac8df6b93d0ea5c621c174f605fd47a | 37 | 77,785 | 13 | netbox/users/models.py | netbox | https://github.com/netbox-community/netbox.git | 0 | create_userconfig | 3 | {
"docstring": "\n Automatically create a new UserConfig when a new User is created. Skip this if importing a user from a fixture.\n ",
"language": "en",
"n_whitespaces": 27,
"n_words": 20,
"vocab_size": 16
} | 12 | 4 | Fixes #9156: Fix loading UserConfig data from fixtures | 71 | 18 | def create_userconfig(instance, created, raw=False, **kwargs):
if created and not raw:
config = get_config()
UserConfig(user=instance, data=config.DEFAULT_USER_PREFERENCES).save()
#
# REST API
#
| 20 | 42 | 264,682 | |
Python | pytables.py | 7d2f9b8d59908fbf57c6453bc41891efbfe981a6 | 181 | 39,982 | 12 | pandas/io/pytables.py | pandas | https://github.com/pandas-dev/pandas.git | 0 | validate_attr | 6 | {
"docstring": "validate that we have the same order as the existing & same dtype",
"language": "en",
"n_whitespaces": 12,
"n_words": 13,
"vocab_size": 11
} | 13 | 11 | TYP: some return annotations in pytables.py (#47512) | 124 | 34 | def validate_attr(self, append) -> None:
if append:
existing_fields = getattr(self.attrs, self.kind_attr, None)
if existing_fields is not None and existing_fields != list(self.values):
raise ValueError("appended items do not match existing items in table!")
... | 59 | 78 | 167,375 | |
Python | external.py | cb2713e7050f2783493736e43a6b704865ce61c5 | 165 | 43,144 | 12 | gradio/external.py | gradio | https://github.com/gradio-app/gradio.git | 0 | load_blocks_from_repo | 2 | {
"docstring": "Creates and returns a Blocks instance from several kinds of Hugging Face repos:\n 1) A model repo\n 2) A Spaces repo running Gradio 2.x\n 3) A Spaces repo running Gradio 3.x\n ",
"language": "en",
"n_whitespaces": 43,
"n_words": 31,
"vocab_size": 24
} | 17 | 15 | Getting Interface.load() working for 2.x and 3.x models and Spaces (#1361)
* version
* refactor for model and 2.x spaces
* fixing tests
* fixed tests
* getting there...
* formatting
* formatting
* fixes
* formatting
* external dependencies working
* formatting
* loading from 3.x
* c... | 167 | 61 | def load_blocks_from_repo(name, src=None, api_key=None, alias=None, **kwargs):
if src is None:
tokens = name.split(
"/"
) # Separate the source (e.g. "huggingface") from the repo name (e.g. "google/vit-base-patch16-224")
assert (
len(tokens) > 1
), "Eith... | 75 | 104 | 180,326 | |
Python | versioncontrol.py | f638f5d0e6c8ebed0e69a6584bc7f003ec646580 | 257 | 12,562 | 10 | .venv/lib/python3.8/site-packages/pip/_internal/vcs/versioncontrol.py | transferlearning | https://github.com/jindongwang/transferlearning.git | 0 | get_backend_for_dir | 4 | {
"docstring": "\n Return a VersionControl object if a repository of that type is found\n at the given directory.\n ",
"language": "en",
"n_whitespaces": 38,
"n_words": 16,
"vocab_size": 15
} | 16 | 13 | upd; format | 126 | 67 | def get_backend_for_dir(self, location):
# type: (str) -> Optional[VersionControl]
vcs_backends = {}
for vcs_backend in self._registry.values():
repo_path = vcs_backend.get_repository_root(location)
if not repo_path:
continue
logger.de... | 86 | 75 | 61,419 | |
Python | canvas.py | 3a7a82af9588629dad5807e0862bacbbd5d7a7f2 | 17 | 52,267 | 8 | celery/canvas.py | celery | https://github.com/celery/celery.git | 0 | _stamp_regen_task | 1 | {
"docstring": "When stamping a sequence of tasks created by a generator,\n we use this function to stamp each task in the generator\n without exhausting it.",
"language": "en",
"n_whitespaces": 29,
"n_words": 24,
"vocab_size": 23
} | 5 | 3 | Canvas.py doc enhancement (#7889)
* Enhanced doc for canvas.maybe_unroll_group()
* Enhanced doc for canvas._stamp_regen_task()
* Enhanced doc for canvas._merge_dictionaries() | 39 | 8 | def _stamp_regen_task(task, visitor, **headers):
task.stamp(visitor=visitor, **headers)
return task
| 8 | 24 | 208,258 | |
Python | context.py | 11e1a8a3fa8d13513fe926b731fb907a066af2a1 | 662 | 46,847 | 15 | src/pandas_profiling/visualisation/context.py | ydata-profiling | https://github.com/ydataai/ydata-profiling.git | 0 | manage_matplotlib_context | 2 | {
"docstring": "Return a context manager for temporarily changing matplotlib unit registries and rcParams.",
"language": "en",
"n_whitespaces": 11,
"n_words": 12,
"vocab_size": 12
} | 19 | 62 | fix: change context managed backend (#1149) | 503 | 139 | def manage_matplotlib_context() -> Any:
originalRcParams = matplotlib.rcParams.copy()
# Credits for this style go to the ggplot and seaborn packages.
# We copied the style file to remove dependencies on the Seaborn package.
# Check it out, it's an awesome library for plotting
customRcParam... | 184 | 273 | 191,835 | |
Python | test_gen_backend_stubs.py | bb5b4cceb6f737448eaaa6817cd773b6f4b0e77d | 32 | 21,490 | 8 | tools/test/test_gen_backend_stubs.py | pytorch | https://github.com/pytorch/pytorch.git | 0 | test_empty_backend | 1 | {
"docstring": "\\\nbackend:\ncpp_namespace: torch_xla\nsupported:\n- absYou must provide a value for \"backend\"",
"language": "en",
"n_whitespaces": 8,
"n_words": 13,
"vocab_size": 13
} | 6 | 8 | Revert "Revert D32498569: allow external backend codegen to toggle whether to generate out= and inplace kernels" (#69950)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69950
This reverts commit f6cad53443704dfe5a20cc62bee14d91e3bffcaa.
Test Plan: Imported from OSS
Reviewed By: albanD
Diff... | 47 | 10 | def test_empty_backend(self) -> None:
yaml_str =
output_error = self.get_errors_from_gen_backend_stubs(yaml_str)
self.assertExpectedInline(output_error, )
| 11 | 26 | 102,175 | |
Python | algorithm.py | e368dd9b4e10026767df66d1811a92bd8ca2d8f9 | 150 | 30,913 | 12 | rllib/algorithms/algorithm.py | ray | https://github.com/ray-project/ray.git | 0 | _run_offline_evaluation | 3 | {
"docstring": "Runs offline evaluation via `OfflineEvaluator.estimate_on_dataset()` API.\n\n This method will be used when `evaluation_dataset` is provided.\n Note: This will only work if the policy is a single agent policy.\n\n Returns:\n The results dict from the offline evaluation ... | 17 | 12 | [RLlib] By-pass Evaluation workers when doing OPE (#30135)
Signed-off-by: Kourosh Hakhamaneshi <[email protected]> | 121 | 26 | def _run_offline_evaluation(self):
assert len(self.workers.local_worker().policy_map) == 1
parallelism = self.evaluation_config.evaluation_num_workers or 1
offline_eval_results = {"off_policy_estimator": {}}
for evaluator_name, offline_evaluator in self.reward_estimators.items(... | 30 | 74 | 136,419 |