| # | name | file | line | type | comment |
|---|------|------|------|------|---------|
| 0 | UserInputError | tensorflow/configure.py | 74 | class | |
| 1 | is_windows | tensorflow/configure.py | 78 | function | |
| 2 | is_linux | tensorflow/configure.py | 82 | function | |
| 3 | is_macos | tensorflow/configure.py | 86 | function | |
| 4 | is_ppc64le | tensorflow/configure.py | 90 | function | |
| 5 | is_cygwin | tensorflow/configure.py | 94 | function | |
| 6 | get_input | tensorflow/configure.py | 98 | function | |
| 7 | symlink_force | tensorflow/configure.py | 109 | function | Force symlink, equivalent of 'ln -sf'.<br>Args:<br>target: items to link to.<br>link_name: name of the link. |
| 8 | sed_in_place | tensorflow/configure.py | 126 | function | Replace old string with new string in file.<br>Args:<br>filename: string for filename.<br>old: string to replace.<br>new: new string to replace to. |
| 9 | write_to_bazelrc | tensorflow/configure.py | 141 | function | |
| 10 | write_action_env_to_bazelrc | tensorflow/configure.py | 146 | function | |
| 11 | run_shell | tensorflow/configure.py | 150 | function | |
| 12 | cygpath | tensorflow/configure.py | 163 | function | Convert path from posix to windows. |
| 13 | get_python_path | tensorflow/configure.py | 168 | function | Get the python site package paths. |
| 14 | get_python_major_version | tensorflow/configure.py | 198 | function | Get the python major version. |
| 15 | setup_python | tensorflow/configure.py | 203 | function | Setup python related env variables. |
| 16 | reset_tf_configure_bazelrc | tensorflow/configure.py | 273 | function | Reset file that contains customized config settings. |
| 17 | cleanup_makefile | tensorflow/configure.py | 278 | function | Delete any leftover BUILD files from the Makefile build.<br>These files could interfere with Bazel parsing. |
| 18 | get_var | tensorflow/configure.py | 292 | function | Get boolean input from user.<br>If var_name is not set in env, ask user to enable query_item or not. If the response is empty, use the default.<br>Args:<br>environ_cp: copy of the os.environ.<br>var_name: string for name of environment variable, e.g. "TF_NEED_CUDA".<br>query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs".<br>enabled_by_default: boolean for default behavior.<br>question: optional string for how to ask for user input.<br>yes_reply: optional string for reply when feature is enabled.<br>no_reply: optional string for reply when feature is disabled.<br>Returns:<br>boolean value of the variable.<br>Raises:<br>UserInputError: if an environment variable is set but cannot be interpreted as a boolean indicator, assume that the user has made a scripting error and will continue to provide invalid input. Raise the error to avoid infinitely looping. |
| 19 | set_build_var | tensorflow/configure.py | 377 | function | Set if query_item will be enabled for the build.<br>Ask user if query_item will be enabled. Default is used if no input is given. Set subprocess environment variable and write to .bazelrc if enabled.<br>Args:<br>environ_cp: copy of the os.environ.<br>var_name: string for name of environment variable, e.g. "TF_NEED_CUDA".<br>query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs".<br>option_name: string for option to define in .bazelrc.<br>enabled_by_default: boolean for default behavior.<br>bazel_config_name: Name for Bazel --config argument to enable build feature. |
| 20 | set_action_env_var | tensorflow/configure.py | 411 | function | Set boolean action_env variable.<br>Ask user if query_item will be enabled. Default is used if no input is given. Set environment variable and write to .bazelrc.<br>Args:<br>environ_cp: copy of the os.environ.<br>var_name: string for name of environment variable, e.g. "TF_NEED_CUDA".<br>query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs".<br>enabled_by_default: boolean for default behavior.<br>question: optional string for how to ask for user input.<br>yes_reply: optional string for reply when feature is enabled.<br>no_reply: optional string for reply when feature is disabled.<br>bazel_config_name: adding config to .bazelrc instead of action_env. |
| 21 | convert_version_to_int | tensorflow/configure.py | 446 | function | Convert a version number to an integer that can be used to compare.<br>Version strings of the form X.YZ and X.Y.Z-xxxxx are supported. The 'xxxxx' part, for instance 'homebrew' on OS X, is ignored.<br>Args:<br>version: a version to be converted.<br>Returns:<br>An integer if converted successfully, otherwise None. |
| 22 | check_bazel_version | tensorflow/configure.py | 471 | function | Check that the installed bazel version is between min_version and max_version.<br>Args:<br>min_version: string for minimum bazel version (must exist!).<br>max_version: string for maximum bazel version (must exist!).<br>Returns:<br>The bazel version detected. |
| 23 | set_cc_opt_flags | tensorflow/configure.py | 518 | function | Set up architecture-dependent optimization flags.<br>Also append CC optimization flags to .bazelrc.<br>Args:<br>environ_cp: copy of the os.environ. |
| 24 | set_tf_cuda_clang | tensorflow/configure.py | 546 | function | Set TF_CUDA_CLANG action_env.<br>Args:<br>environ_cp: copy of the os.environ. |
| 25 | set_tf_download_clang | tensorflow/configure.py | 566 | function | Set TF_DOWNLOAD_CLANG action_env. |
| 26 | get_from_env_or_user_or_default | tensorflow/configure.py | 582 | function | Get var_name either from env, or user or default.<br>If var_name has been set as environment variable, use the preset value, else ask for user input. If no input is provided, the default is used.<br>Args:<br>environ_cp: copy of the os.environ.<br>var_name: string for name of environment variable, e.g. "TF_NEED_CUDA".<br>ask_for_var: string for how to ask for user input.<br>var_default: default value string.<br>Returns:<br>string value for var_name |
| 27 | set_clang_cuda_compiler_path | tensorflow/configure.py | 607 | function | Set CLANG_CUDA_COMPILER_PATH. |
| 28 | prompt_loop_or_load_from_env | tensorflow/configure.py | 630 | function | Loop over user prompts for an ENV param until receiving a valid response.<br>For the env param var_name, read from the environment or verify user input until receiving valid input. When done, set var_name in the environ_cp to its new value.<br>Args:<br>environ_cp: (Dict) copy of the os.environ.<br>var_name: (String) string for name of environment variable, e.g. "TF_MYVAR".<br>var_default: (String) default value string.<br>ask_for_var: (String) string for how to ask for user input.<br>check_success: (Function) function that takes one argument and returns a boolean. Should return True if the value provided is considered valid. May contain a complex error message if error_msg does not provide enough information. In that case, set suppress_default_error to True.<br>error_msg: (String) String with one and only one '%s'. Formatted with each invalid response upon check_success(input) failure.<br>suppress_default_error: (Bool) Suppress the above error message in favor of one from the check_success function.<br>resolve_symlinks: (Bool) Translate symbolic links into the real filepath.<br>n_ask_attempts: (Integer) Number of times to query for valid input before raising an error and quitting.<br>Returns:<br>[String] The value of var_name after querying for input.<br>Raises:<br>UserInputError: if a query has been attempted n_ask_attempts times without success, assume that the user has made a scripting error and will continue to provide invalid input. Raise the error to avoid infinitely looping. |
| 29 | create_android_ndk_rule | tensorflow/configure.py | 696 | function | Set ANDROID_NDK_HOME and write Android NDK WORKSPACE rule. |
| 30 | create_android_sdk_rule | tensorflow/configure.py | 724 | function | Set Android variables and write Android SDK WORKSPACE rule. |
| 31 | get_ndk_api_level | tensorflow/configure.py | 788 | function | Gets the appropriate NDK API level to use for the provided Android NDK path. |
| 32 | set_gcc_host_compiler_path | tensorflow/configure.py | 836 | function | Set GCC_HOST_COMPILER_PATH. |
| 33 | reformat_version_sequence | tensorflow/configure.py | 858 | function | Reformat the version string to have the given number of sequences.<br>For example:<br>Given (7, 2) -> 7.0<br>(7.0.1, 2) -> 7.0<br>(5, 1) -> 5<br>(5.0.3.2, 1) -> 5<br>Args:<br>version_str: String, the version string.<br>sequence_count: int, an integer.<br>Returns:<br>string, reformatted version string. |
| 34 | set_tf_cuda_paths | tensorflow/configure.py | 881 | function | Set TF_CUDA_PATHS. |
| 35 | set_tf_cuda_version | tensorflow/configure.py | 892 | function | Set TF_CUDA_VERSION. |
| 36 | set_tf_cudnn_version | tensorflow/configure.py | 904 | function | Set TF_CUDNN_VERSION. |
| 37 | is_cuda_compatible | tensorflow/configure.py | 916 | function | Check compatibility between given library and cudnn/cudart libraries. |
| 38 | set_tf_tensorrt_version | tensorflow/configure.py | 945 | function | Set TF_TENSORRT_VERSION. |
| 39 | set_tf_nccl_version | tensorflow/configure.py | 962 | function | Set TF_NCCL_VERSION. |
| 40 | get_native_cuda_compute_capabilities | tensorflow/configure.py | 979 | function | Get native cuda compute capabilities.<br>Args:<br>environ_cp: copy of the os.environ.<br>Returns:<br>string of native cuda compute capabilities, separated by comma. |
| 41 | set_tf_cuda_compute_capabilities | tensorflow/configure.py | 1003 | function | Set TF_CUDA_COMPUTE_CAPABILITIES. |
| 42 | set_other_cuda_vars | tensorflow/configure.py | 1074 | function | Set other CUDA related variables. |
| 43 | set_host_cxx_compiler | tensorflow/configure.py | 1083 | function | Set HOST_CXX_COMPILER. |
| 44 | set_host_c_compiler | tensorflow/configure.py | 1100 | function | Set HOST_C_COMPILER. |
| 45 | set_computecpp_toolkit_path | tensorflow/configure.py | 1117 | function | Set COMPUTECPP_TOOLKIT_PATH. |
| 46 | set_trisycl_include_dir | tensorflow/configure.py | 1149 | function | Set TRISYCL_INCLUDE_DIR. |
| 47 | set_system_libs_flag | tensorflow/configure.py | 1216 | function | |
| 48 | is_reduced_optimize_huge_functions_available | tensorflow/configure.py | 1233 | function | Check to see if the system supports /d2ReducedOptimizeHugeFunctions.<br>This compiler flag was introduced to the Visual Studio compiler in version 16.4 (available in Visual Studio 2019, Preview edition only, as of 2019-11-19). TensorFlow needs this flag to massively reduce compile times, but until 16.4 is officially released, we can't depend on it.<br>See also https://groups.google.com/a/tensorflow.org/d/topic/build/SsW98Eo7l3o/discussion<br>Because it's very annoying to check this manually (to check the MSVC installed versions, you need to use the registry, and it's not clear if Bazel will be using that install version anyway), we expect environments that know they may use this flag to export TF_VC_VERSION=16.4<br>TODO(angerson, gunan): Remove this function when TensorFlow's minimum VS version is upgraded to 16.4.<br>Arguments:<br>environ_cp: Environment of the current execution<br>Returns:<br>boolean, whether or not /d2ReducedOptimizeHugeFunctions is available on this machine. |
| 49 | set_windows_build_flags | tensorflow/configure.py | 1262 | function | Set Windows specific build options. |
| 50 | config_info_line | tensorflow/configure.py | 1283 | function | Helper function to print formatted help text for Bazel config options. |
| 51 | configure_ios | tensorflow/configure.py | 1288 | function | Configures TensorFlow for iOS builds.<br>This function will only be executed if `is_macos()` is true. |
| 52 | validate_cuda_config | tensorflow/configure.py | 1305 | function | Run find_cuda_config.py and return cuda_toolkit_path, or None. |
| 53 | VarsAndArithmeticObjectGraph | tensorflow/tensorflow/cc/saved_model/testdata/generate_saved_models.py | 37 | class | Three vars (one in a sub-module) and compute method. |
| 54 | compute | tensorflow/tensorflow/cc/saved_model/testdata/generate_saved_models.py | 51 | method | |
| 55 | ReferencesParent | tensorflow/tensorflow/cc/saved_model/testdata/generate_saved_models.py | 55 | class | |
| 56 | CyclicModule | tensorflow/tensorflow/cc/saved_model/testdata/generate_saved_models.py | 64 | class | |
| 57 | tfadd | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 48 | function | |
| 58 | tfadd_with_ckpt | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 54 | function | |
| 59 | tfadd_with_ckpt_saver | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 69 | function | |
| 60 | tfassert_eq | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 88 | function | |
| 61 | tfcond | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 96 | function | |
| 62 | tfgather | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 104 | function | |
| 63 | tfmatmul | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 110 | function | |
| 64 | tfmatmulandadd | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 116 | function | |
| 65 | tffunction | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 124 | function | |
| 66 | tfsplits | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 135 | function | A more complex graph, including splits. |
| 67 | tftop_k | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 152 | function | |
| 68 | tfvariable_readonly | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 158 | function | |
| 69 | tfvariable | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 169 | function | |
| 70 | tfvariable_sequential_updates | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 177 | function | |
| 71 | export_debug_info | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 189 | function | Exports debug information from a graph.<br>Args:<br>exported_graph: A Graph that has been created by tracing a saveable view.<br>Returns:<br>Corresponding GraphDebugInfo with traces for all ops in exported_graph. |
| 72 | write_graph | tensorflow/tensorflow/compiler/aot/tests/make_test_graphs.py | 204 | function | Build a graph using build_graph and write it out. |
| 73 | set_tf_options | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/common_v1.py | 38 | function | |
| 74 | ReferencesParent | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/cyclic_object_graph.py | 27 | class | |
| 75 | Child | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/dag_object_graph.py | 27 | class | |
| 76 | plus | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/defun_export.py | 29 | function | |
| 77 | write_vocabulary_file | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/hash_table_asset_v1.py | 39 | function | Write temporary vocab file for module construction. |
| 78 | mnist_model | tensorflow/tensorflow/compiler/mlir/tensorflow/tests/tf_saved_model/keras.py | 27 | function | Creates a MNIST model. |
| 79 | adam_update_numpy | tensorflow/tensorflow/compiler/tests/adam_test.py | 34 | function | |
| 80 | GetRunMetadataLabels | tensorflow/tensorflow/compiler/tests/dense_layer_test.py | 36 | function | Returns all labels in run_metadata. |
| 81 | InLabels | tensorflow/tensorflow/compiler/tests/dense_layer_test.py | 45 | function | Returns true iff one of the labels contains substr. |
| 82 | ReferenceDepthwiseConv2D | tensorflow/tensorflow/compiler/tests/depthwise_conv_op_test.py | 35 | function | |
| 83 | multiple_tpus | tensorflow/tensorflow/compiler/tests/eager_test.py | 772 | function | |
| 84 | ExtractImagePatches | tensorflow/tensorflow/compiler/tests/extract_image_patches_op_test.py | 29 | class | Functional tests for ExtractImagePatches op. |
| 85 | pick_10 | tensorflow/tensorflow/compiler/tests/fft_test.py | 38 | function | |
| 86 | to_32bit | tensorflow/tensorflow/compiler/tests/fft_test.py | 45 | function | |
| 87 | GatherBenchmark | tensorflow/tensorflow/compiler/tests/gather_test.py | 158 | class | Microbenchmarks for the gather op. |
| 88 | benchmarkSliceGatherAxis0 | tensorflow/tensorflow/compiler/tests/gather_test.py | 183 | method | |
| 89 | benchmarkSliceGatherAxis0XLA | tensorflow/tensorflow/compiler/tests/gather_test.py | 186 | method | |
| 90 | benchmarkSliceGatherAxis1 | tensorflow/tensorflow/compiler/tests/gather_test.py | 189 | method | |
| 91 | benchmarkSliceGatherAxis1XLA | tensorflow/tensorflow/compiler/tests/gather_test.py | 192 | method | |
| 92 | benchmarkSliceGatherAxis4 | tensorflow/tensorflow/compiler/tests/gather_test.py | 195 | method | |
| 93 | benchmarkSliceGatherAxis4XLA | tensorflow/tensorflow/compiler/tests/gather_test.py | 198 | method | |
| 94 | benchmarkNontrivialGatherAxis0 | tensorflow/tensorflow/compiler/tests/gather_test.py | 201 | method | |
| 95 | benchmarkNontrivialGatherAxis0XLA | tensorflow/tensorflow/compiler/tests/gather_test.py | 204 | method | |
| 96 | benchmarkNontrivialGatherAxis1 | tensorflow/tensorflow/compiler/tests/gather_test.py | 207 | method | |
| 97 | benchmarkNontrivialGatherAxis1XLA | tensorflow/tensorflow/compiler/tests/gather_test.py | 210 | method | |
| 98 | benchmarkNontrivialGatherAxis4 | tensorflow/tensorflow/compiler/tests/gather_test.py | 213 | method | |
| 99 | benchmarkNontrivialGatherAxis4XLA | tensorflow/tensorflow/compiler/tests/gather_test.py | 216 | method | |
| 100 | BuilderFn | tensorflow/tensorflow/compiler/tests/gather_test.py | 163 | method | |
| 101 | NoRewriteSessionConfig | tensorflow/tensorflow/compiler/tests/jit_test.py | 46 | function | |
| 102 | CompiledKernel | tensorflow/tensorflow/compiler/tests/jit_test.py | 56 | function | Execute 'fn' as a compiled XLA kernel, with 'inputs'. |
| 103 | RunMetadataLabels | tensorflow/tensorflow/compiler/tests/jit_test.py | 68 | function | Returns all labels in run_metadata. |
| 104 | InLabels | tensorflow/tensorflow/compiler/tests/jit_test.py | 77 | function | Returns true iff one of the labels contains substr. |
| 105 | MetadataHasXlaRunOp | tensorflow/tensorflow/compiler/tests/jit_test.py | 82 | function | Returns true if there are XlaRun kernels in run_metadata's timeline. |
| 106 | Clip | tensorflow/tensorflow/compiler/tests/lstm.py | 38 | function | Clips x to the range [-1., 1.]. |
| 107 | LSTMCellWeightsShape | tensorflow/tensorflow/compiler/tests/lstm.py | 43 | function | Returns the shape of the weights for a single LSTM cell. |
| 108 | LSTMCell | tensorflow/tensorflow/compiler/tests/lstm.py | 50 | function | Unrolls a single LSTM cell with clipped activations forward by one step.<br>Args:<br>weights: Weight matrix with shape LSTMCellWeightsShape.<br>m_prev: Previous m states with shape [batch_size, num_nodes].<br>c_prev: Previous c states with shape [batch_size, num_nodes].<br>x: Input with shape [batch_size, num_inputs].<br>pad: Padding with shape [batch_size, 1]. Each padding value is either 0 or 1, where 1 indicates padding; i.e. the input is shorter than the sequence length, and the (m, c) states should simply be passed through from the previous states.<br>Returns:<br>The next (m, c) states, each with shape [batch_size, num_nodes]. |
| 109 | LSTMLayer | tensorflow/tensorflow/compiler/tests/lstm.py | 88 | function | Unrolls a layer of LSTM cells forward by the sequence length.<br>The sequence length is determined by the length of x_seq and pad_seq, which must be the same.<br>Args:<br>cell_name: Base name of each cell.<br>weights: Weight matrix with shape LSTMCellWeightsShape.<br>m: Initial m states with shape [batch_size, num_nodes].<br>c: Initial c states with shape [batch_size, num_nodes].<br>x_seq: List of inputs, each with shape [batch_size, num_inputs]. The length of the list is the sequence length.<br>pad_seq: List of paddings, each with shape [batch_size, 1]. The length of the list is the sequence length. Each padding value is either 0 or 1, where 1 indicates padding; i.e. the input is shorter than the sequence length.<br>Returns:<br>List of per-sequence-step outputs, each with shape [batch_size, num_nodes].<br>Raises:<br>ValueError: If len(x_seq) != len(pad_seq). |
| 110 | RandomVar | tensorflow/tensorflow/compiler/tests/lstm.py | 121 | function | Returns a variable of the given shape initialized to random values. |
| 111 | RandomInputs | tensorflow/tensorflow/compiler/tests/lstm.py | 127 | function | Returns randomly initialized (x_seq, pad_seq) sequences. |
| 112 | BuildLSTMLayer | tensorflow/tensorflow/compiler/tests/lstm.py | 140 | function | Builds a single LSTM layer with random weights and inputs.<br>Args:<br>batch_size: Inputs are fed in batches of this size.<br>seq_length: The sequence length to unroll the LSTM layer.<br>num_inputs: Dimension of inputs that are fed into each LSTM cell.<br>num_nodes: The number of nodes in each LSTM cell.<br>Returns:<br>(out_seq, weights) pair. The out_seq is a list of per-sequence-step outputs, each with shape [batch_size, num_nodes]. The weights are a list of weight variables that may be trained. |
| 113 | LSTMBenchmark | tensorflow/tensorflow/compiler/tests/lstm_test.py | 238 | class | Micro-benchmarks for a single layer of LSTM cells. |
| 114 | benchmarkLayerInference | tensorflow/tensorflow/compiler/tests/lstm_test.py | 256 | method | |
| 115 | benchmarkLayerInferenceXLA | tensorflow/tensorflow/compiler/tests/lstm_test.py | 260 | method | |
| 116 | benchmarkLayerTraining | tensorflow/tensorflow/compiler/tests/lstm_test.py | 264 | method | |
| 117 | benchmarkLayerTrainingXLA | tensorflow/tensorflow/compiler/tests/lstm_test.py | 268 | method | |
| 118 | zip_to_first_list_length | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 32 | function | |
| 119 | repack_diagonals | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 40 | function | |
| 120 | square_cases | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 95 | function | |
| 121 | tall_cases | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 173 | function | |
| 122 | fat_cases | tensorflow/tensorflow/compiler/tests/matrix_diag_ops_test.py | 261 | function | |
| 123 | MakePlaceholder | tensorflow/tensorflow/compiler/tests/matrix_triangular_solve_op_test.py | 36 | function | |
| 124 | NHWCToNCHW | tensorflow/tensorflow/compiler/tests/pooling_ops_test.py | 33 | function | Convert the input from NHWC format to NCHW.<br>Args:<br>input_tensor: a 4-D tensor, or a 4-element array representing the same.<br>Returns:<br>the converted tensor or a shape array |
| 125 | NCHWToNHWC | tensorflow/tensorflow/compiler/tests/pooling_ops_test.py | 48 | function | Convert the input from NCHW format to NHWC.<br>Args:<br>input_tensor: a 4-D tensor, or a 4-element array representing the same.<br>Returns:<br>the converted tensor or a shape array |
| 126 | numpy_reverse | tensorflow/tensorflow/compiler/tests/scan_ops_test.py | 32 | function | |
| 127 | handle_options | tensorflow/tensorflow/compiler/tests/scan_ops_test.py | 43 | function | Adds tf options to numpy scan ops. |
| 128 | space_to_batch_direct | tensorflow/tensorflow/compiler/tests/spacetobatch_op_test.py | 30 | function | Direct Python implementation of space-to-batch conversion.<br>This is used for tests only.<br>Args:<br>input_array: N-D array<br>block_shape: 1-D array of shape [num_block_dims].<br>paddings: 2-D array of shape [num_block_dims, 2].<br>Returns:<br>Converted tensor. |
| 129 | implicit_reparameterization_grad | tensorflow/tensorflow/compiler/tests/special_math_test.py | 58 | function | |
| 130 | xla_device | tensorflow/tensorflow/compiler/tests/stateful_random_ops_test.py | 41 | function | |
| 131 | xla_device_name | tensorflow/tensorflow/compiler/tests/stateful_random_ops_test.py | 55 | function | |
| 132 | StatelessRandomOpsBenchmark | tensorflow/tensorflow/compiler/tests/stateless_random_ops_test.py | 136 | class | Microbenchmarks for the stateless random ops. |
| 133 | benchmarkUniformF32 | tensorflow/tensorflow/compiler/tests/stateless_random_ops_test.py | 152 | method | |
| 134 | benchmarkUniformF64 | tensorflow/tensorflow/compiler/tests/stateless_random_ops_test.py | 156 | method | |
| 135 | benchmarkUniformF32XLA | tensorflow/tensorflow/compiler/tests/stateless_random_ops_test.py | 160 | method | |
| 136 | benchmarkUniformF64XLA | tensorflow/tensorflow/compiler/tests/stateless_random_ops_test.py | 164 | method | |
| 137 | BuilderFn | tensorflow/tensorflow/compiler/tests/stateless_random_ops_test.py | 141 | method | |
| 138 | ConvertBetweenDataFormats | tensorflow/tensorflow/compiler/tests/test_utils.py | 26 | function | Converts 4D tensor between data formats. |
| 139 | PermuteDimsBetweenDataFormats | tensorflow/tensorflow/compiler/tests/test_utils.py | 47 | function | Get new shape for converting between data formats. |
| 140 | RunWithWarmup | tensorflow/tensorflow/compiler/tests/test_utils.py | 71 | function | Runs a graph a few times to ensure that its clusters are compiled. |
| 141 | nhwc_to_format | tensorflow/tensorflow/compiler/tests/unary_ops_test.py | 37 | function | Converts a numpy array from NHWC format to `data_format`. |
| 142 | StridedSliceAssignChecker | tensorflow/tensorflow/compiler/tests/variable_ops_test.py | 422 | class | Compares the results of a slice assignment using TensorFlow and numpy. |
| 143 | is_compile_on_demand | tensorflow/tensorflow/compiler/tests/while_test.py | 260 | function | |
| 144 | parse_disabled_manifest | tensorflow/tensorflow/compiler/tests/xla_test.py | 55 | function | |
| 145 | Benchmark | tensorflow/tensorflow/compiler/tests/xla_test.py | 250 | function | Build a graph and run benchmarks against it, with or without XLA.<br>Args:<br>tf_bench: An instance of tf.test.Benchmark, used to run the benchmark.<br>builder_fn: A function that builds a graph when invoked, and returns (name, fetches), where name is the name of the test, and fetches is a list of tensors to fetch as output.<br>use_xla_jit: If true compile with the XLA JIT, otherwise use regular TF.<br>device: The tensorflow device to run on, e.g. "cpu", "gpu".<br>separate_compiled_gradients: If true put each gradient subgraph into a separate compilation scope. This gives fine-grained control over which portions of the graph will be compiled as a single unit. Compiling gradients separately may yield better performance for some graphs. The scope is named based on the scope of the forward computation as well as the name of the gradients. As a result, the gradients will be compiled in a scope that is separate from both the forward computation, and from other gradients. |
| 146 | broadcast | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 226 | function | |
| 147 | clamp | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 234 | function | |
| 148 | conv | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 241 | function | Wraps the XLA ConvGeneralDilated operator.<br>ConvGeneralDilated is the most general form of XLA convolution and is documented at https://www.tensorflow.org/performance/xla/operation_semantics#conv_convolution<br>Args:<br>lhs: the input tensor<br>rhs: the kernel tensor<br>window_strides: the inter-window strides<br>padding: the padding to apply at the start and end of each input dimensions<br>lhs_dilation: dilation to apply between input elements<br>rhs_dilation: dilation to apply between kernel elements<br>dimension_numbers: a `ConvolutionDimensionNumbers` proto.<br>feature_group_count: number of feature groups for grouped convolution.<br>precision_config: a `xla.PrecisionConfig` proto.<br>name: an optional name for the operator<br>Returns:<br>A tensor representing the output of the convolution. |
| 149 | dot | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 291 | function | |
| 150 | dot_general | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 295 | function | |
| 151 | self_adjoint_eig | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 307 | function | |
| 152 | svd | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 311 | function | |
| 153 | random_normal | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 327 | function | |
| 154 | random_uniform | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 333 | function | |
| 155 | reduce_window | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 343 | function | Wraps the XLA ReduceWindow operator.<br>ReduceWindow is documented at https://www.tensorflow.org/performance/xla/operation_semantics#reducewindow .<br>Args:<br>operand: the input tensor<br>init: a scalar tensor representing the initial value for the reduction<br>reducer: a reduction function that combines a pair of scalars.<br>window_dimensions: shape of the window, as a list of integers<br>window_strides: inter-window strides, as a list of integers. Optional; if omitted, defaults to strides of 1.<br>padding: padding to apply to 'operand'. List of (low, high) pairs of integers that specify the padding to apply before and after each dimension. Optional; if omitted, defaults to no padding.<br>name: the operator name, or None.<br>Returns:<br>A tensor that represents the output of the reduce_window operator. |
| 156 | reshape | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 391 | function | |
| 157 | select | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 398 | function | |
| 158 | slice | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 406 | function | |
| 159 | gather | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 452 | function | |
| 160 | scatter | tensorflow/tensorflow/compiler/tf2xla/python/xla.py | 463 | function | |
| 161 | Sharding | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 28 | class | A class to support adding sharding attributes to Ops.<br>Use the factory constructors and then call apply_to_tensor:<br>Sharding.replicate().apply_to_tensor(tensor) |
| 162 | replicate | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 40 | method | Returns a replicated sharding attribute.<br>This causes an op to be computed in its entirety independently on all cores in the XLA device. |
| 163 | assign_device | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 50 | method | Returns an AssignDevice sharding attribute.<br>This causes an op to be computed in its entirety only on one core in the XLA device.<br>Args:<br>core: The core to assign this Op to. |
| 164 | tile | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 65 | method | Returns a Tiled sharding attribute.<br>This causes an op to be partially computed on multiple cores in the XLA device.<br>Args:<br>tile_assignment: An np.ndarray describing the topology of the tiling and which device will compute which part of the topology.<br>Raises:<br>TypeError: tile_assignment was not of np.array type.<br>TODO(jmolloy): This concept is nefarious and is not something we really want to expose to users (especially as the contract for tile_assignment is very strict). |
| 165 | split | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 93 | method | Returns a Sharding that splits a tensor across a dimension.<br>This creates a Tiled attribute, similar to tile(), but easier to use for the common case of tiling a tensor N ways in one dimension.<br>Args:<br>tensor: A tf.Tensor to split.<br>split_dimension: The dimension number to split.<br>num_devices: The number of cores to split `tensor` over.<br>input_shape: The shape of the original tensor.<br>Raises:<br>ValueError: The tensor to split was smaller in the split dimension than the number of devices to split over. |
| 166 | apply_to_tensor | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 128 | method | Applies this Sharding attribute to `tensor`.<br>Args:<br>tensor: A tf.Tensor to split.<br>assign_tuple_sharding: If the sharding type should be a tuple. |
| 167 | proto | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 152 | method | Return the sharding protobuf of type xla_data_pb2.OpSharding. |
| 168 | replicate | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 179 | function | |
| 169 | assign_device | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 188 | function | Returns a tensor that has AssignDevice sharding attribute. |
172 | 170 | tile | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 202 | function | Returns a tensor that has tiled sharding.
Args:
tensor: A tf.Tensor to shard.
tile_assignment: An np.ndarray describing the topology of the tiling and
which device will compute which part of the topology.
assign_tuple_sharding: If the sharding type should be a tuple.
use_sharding_op: If true, adds a sharding op to set the sharding. |
173 | 171 | split | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 224 | function | Returns a tensor that is split along the given dimension.
Args:
tensor: A tf.Tensor to split.
split_dimension: The dimension to split.
num_devices: The number of devices to partition the dimension.
assign_tuple_sharding: If the sharding type should be a tuple.
use_sharding_op: If true, adds a sharding op to set the sharding.
input_shape: The full shape of the input tensor. |
174 | 172 | get_op_sharding | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 248 | function | Returns sharding attribute of an op.
Args:
op: a TensorFlow op.
Returns:
The attribute representing XLA sharding on this op. |
175 | 173 | auto_to_manual_spmd_partition | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 260 | function | Switches from automatic SPMD partitioning to manual partitioning.
Converts a full-shaped tensor (to be automatically partitioned by SPMD
partitioner) to a shard-shaped tensor to be consumed by manually partitioned
ops.
Args:
tensor: A tf.Tensor in full shape.
manual_sharding: a serialized string of OpSharding to be used in manual
partitioning.
Returns:
A shard-shaped tensor to be consumed by manually partitioned ops. |
176 | 174 | manual_to_auto_spmd_partition | tensorflow/tensorflow/compiler/xla/experimental/xla_sharding/xla_sharding.py | 279 | function | Switches from manual partitioning to automatic SPMD partitioning.
Converts a shard-shaped tensor (manually partitioned in SPMD-style) to a
full-shaped tensor to be partitioned automatically by the SPMD partitioner.
Args:
tensor: A tf.Tensor in shard shape.
manual_sharding: a serialized string of OpSharding to be used in manual
partitioning.
full_shape: the shape of tensor before partitioning.
Returns:
A full-shaped tensor to be partitioned automatically by the SPMD
partitioner. |
177 | 175 | numpy_assert_allclose | tensorflow/tensorflow/compiler/xla/python/bfloat16_test.py | 35 | function | |
178 | 176 | register_local_backend_factory | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 101 | function | |
179 | 177 | get_local_backend | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 131 | function | Returns a local backend.
Args:
name: the backend name. If `None`, a default local backend is returned,
typically `gpu` if one is present, or `cpu` if not. If a string, the named
backend is returned or an exception raised.
Returns:
A LocalBackend object. |
180 | 178 | OpMetadata | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 152 | class | Python representation of a xla.OpMetadata protobuf. |
181 | 179 | CurrentSourceInfoMetadata | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 163 | function | Helper for use in source mapping that returns an OpMetadata object. |
182 | 180 | dtype_to_etype | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 206 | function | Convenience function for reading DTYPE_TO_XLA_ELEMENT_TYPE. |
183 | 181 | shape_from_pyval | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 272 | function | Returns a Shape that describes a tuple-tree of Numpy arrays. |
184 | 182 | execute_with_python_values | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 334 | function | Execute on one replica with Python values as arguments and output. |
185 | 183 | execute_with_python_values_replicated | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 345 | function | Execute on many replicas with Python values as arguments and output.
Arguments:
executable: the program to run.
arguments: a list of lists of Python values indexed by `[replica][arg_num]`
to pass as inputs.
backend: the backend we are targeting.
Returns:
A list of python values, one per replica. |
186 | 184 | PaddingType | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 374 | class | |
187 | 185 | window_padding_type_to_pad_values | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 379 | function | Maps PaddingType or string to pad values (list of pairs of ints). |
188 | 186 | register_custom_call_target | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 418 | function | Registers a custom call target.
Args:
name: bytes containing the name of the function.
fn: a PyCapsule object containing the function pointer.
platform: the target platform. |
189 | 187 | PaddingConfigDimension | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 433 | class | Python representation of a xla.PaddingConfigDimension protobuf. |
190 | 188 | PaddingConfig | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 443 | class | Python representation of a xla.PaddingConfig protobuf. |
191 | 189 | make_padding_config | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 451 | function | Create PaddingConfig proto from list of triples of integers.
Args:
padding_config: either a PaddingConfig or a list of integer triples
(edge_padding_low, edge_padding_high, interior_padding) representing the
configuration of the padding operation.
Returns:
A `PaddingConfig` object. |
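The triples accepted by `make_padding_config` map one-to-one onto per-dimension proto fields. A dependency-free sketch of that normalization (plain dicts stand in for the `PaddingConfig` proto the real function returns):

```python
def make_padding_config_sketch(padding_config):
    """Sketch: normalize a list of (edge_padding_low, edge_padding_high,
    interior_padding) integer triples into per-dimension dicts, mirroring
    the make_padding_config contract described above."""
    dimensions = []
    for low, high, interior in padding_config:
        dimensions.append({
            'edge_padding_low': low,     # padding added before the data
            'edge_padding_high': high,   # padding added after the data
            'interior_padding': interior,  # padding between elements
        })
    return dimensions
```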
192 | 190 | DotDimensionNumbers | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 476 | class | Python representation of a xla.DotDimensionNumbers protobuf. |
193 | 191 | make_dot_dimension_numbers | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 488 | function | Builds a DotDimensionNumbers object from a specification.
Args:
dimension_numbers: either a `DotDimensionNumbers` or a nested tuple
`((lhs_contract, rhs_contract), (lhs_batch, rhs_batch))` of lists of
integers representing the dimensions to treat as contracting dimensions
and batch dimensions on each input operand.
Returns:
A `DotDimensionNumbers` object. |
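The nested-tuple form accepted by `make_dot_dimension_numbers` can be illustrated with a small sketch that unpacks it into named fields (a plain dict stands in for the `DotDimensionNumbers` proto the real function builds):

```python
def make_dot_dimension_numbers_sketch(dimension_numbers):
    """Sketch: unpack the ((lhs_contract, rhs_contract),
    (lhs_batch, rhs_batch)) specification described above."""
    (lhs_contract, rhs_contract), (lhs_batch, rhs_batch) = dimension_numbers
    return {
        'lhs_contracting_dimensions': list(lhs_contract),
        'rhs_contracting_dimensions': list(rhs_contract),
        'lhs_batch_dimensions': list(lhs_batch),
        'rhs_batch_dimensions': list(rhs_batch),
    }
```

For example, a batched matmul of `[b, m, k]` against `[b, k, n]` contracts dimension 2 of the lhs with dimension 1 of the rhs and batches over dimension 0 of both: `(((2,), (1,)), ((0,), (0,)))`.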
194 | 192 | ConvolutionDimensionNumbers | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 516 | class | Python representation of a xla.ConvolutionDimensionNumbers protobuf. |
195 | 193 | make_convolution_dimension_numbers | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 536 | function | Builds a ConvolutionDimensionNumbers object from a specification.
Args:
dimension_numbers: optional, either a ConvolutionDimensionNumbers object or
a tuple (lhs_spec, rhs_spec, out_spec). Each element is a string of
length N+2 identifying by position: (1) batch dimensions in lhs, rhs, and
the output with the character 'N', (2) feature dimensions in lhs and the
output with the character 'C', (3) input and output feature dimensions
in rhs with the characters 'I' and 'O' respectively, and (4) spatial
dimension correspondences between lhs, rhs, and the output using any
distinct characters. For example, to indicate dimension numbers
consistent with the Conv operation with two spatial dimensions, one
could use ('NCHW', 'OIHW', 'NCHW'). As another example, to indicate
dimension numbers consistent with the TensorFlow Conv2D operation, one
could use ('NHWC', 'HWIO', 'NHWC'). When using the latter form of
convolution dimension specification, window strides are associated with
spatial dimension character labels according to the order in which the
labels appear in the rhs_spec string, so that window_strides[0] is
matched with the dimension corresponding to the first character
appearing in rhs_spec that is not 'I' or 'O'. By default, use the same
dimension numbering as Conv and ConvWithGeneralPadding.
num_spatial_dimensions: the number of spatial dimensions.
Returns:
A `ConvolutionDimensionNumbers` object. |
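The spec-string convention above ('N' for batch, 'C' for feature, 'I'/'O' for kernel input/output features, other characters for spatial dims) can be decoded positionally. A hedged sketch (the helper name and dict output are illustrative; the real function returns a `ConvolutionDimensionNumbers` proto):

```python
def parse_conv_spec_sketch(lhs_spec, rhs_spec, out_spec):
    """Sketch: decode ('NHWC', 'HWIO', 'NHWC')-style strings into
    dimension indices, per the convention described above."""
    # Spatial dims are matched by character label; their order of
    # appearance in rhs_spec fixes how window_strides[i] pairs up.
    spatial_chars = [c for c in rhs_spec if c not in 'IO']
    return {
        'input_batch_dimension': lhs_spec.index('N'),
        'input_feature_dimension': lhs_spec.index('C'),
        'kernel_input_feature_dimension': rhs_spec.index('I'),
        'kernel_output_feature_dimension': rhs_spec.index('O'),
        'output_batch_dimension': out_spec.index('N'),
        'output_feature_dimension': out_spec.index('C'),
        'input_spatial_dimensions': [lhs_spec.index(c) for c in spatial_chars],
        'kernel_spatial_dimensions': [rhs_spec.index(c) for c in spatial_chars],
        'output_spatial_dimensions': [out_spec.index(c) for c in spatial_chars],
    }
```

With the TensorFlow Conv2D-style spec `('NHWC', 'HWIO', 'NHWC')`, the input spatial dimensions come out as `[1, 2]` and the kernel spatial dimensions as `[0, 1]`.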
196 | 194 | OpSharding | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 600 | class | Python representation of a xla.OpSharding protobuf. |
197 | 195 | PrecisionConfig | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 614 | class | Python representation of a xla.PrecisionConfig protobuf. |
198 | 196 | GatherDimensionNumbers | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 624 | class | Python representation of a xla.GatherDimensionNumbers protobuf. |
199 | 197 | ScatterDimensionNumbers | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 636 | class | Python representation of a xla.ScatterDimensionNumbers protobuf. |
200 | 198 | ReplicaGroup | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 648 | class | Python representation of a xla.ReplicaGroup protobuf. |
201 | 199 | make_replica_groups | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 662 | function | |
202 | 200 | tracebacks | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 677 | function | Context manager that enables or disables traceback collection. |
203 | 201 | heap_profile | tensorflow/tensorflow/compiler/xla/python/xla_client.py | 687 | function | Returns a gzipped pprof protocol buffer containing a heap profile. |
204 | 202 | TpuBackend | tensorflow/tensorflow/compiler/xla/python/tpu_driver/client/tpu_client.py | 29 | class | XLA backend implemented using the Tpu driver API. |
205 | 203 | create | tensorflow/tensorflow/compiler/xla/python/tpu_driver/client/tpu_client.py | 36 | method | Constructs a Cloud TPU backend. |
206 | 204 | ConvertLiteralToNumpyArray | tensorflow/tensorflow/compiler/xla/python_api/xla_literal.py | 28 | function | Converts a XLA literal to a Numpy array. |
207 | 205 | ConvertNumpyArrayToLiteral | tensorflow/tensorflow/compiler/xla/python_api/xla_literal.py | 85 | function | Converts a Numpy array or a nested tuple thereof to an XLA literal. |
208 | 206 | Shape | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 29 | class | Wraps a xla_data_pb2.ShapeProto message with a convenient Python type.
Provides direct access to the underlying xla_data_pb2.ShapeProto message in
the message attribute, along with accessor wrappers to the message's fields.
Avoid direct access to .message unless interacting directly with protobuf APIs
like CopyFrom. In other words, prefer hauling the shape around in a Shape, and
only access .message when strictly required by the protobuf API. |
209 | 207 | element_type | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 71 | method | |
210 | 208 | is_tuple | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 74 | method | |
211 | 209 | dimensions | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 77 | method | |
212 | 210 | tuple_shapes | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 82 | method | If this is a tuple, returns its sequence of constituent Shape objects.
Returns:
Tuple sub-shapes.
Raises:
ValueError: if this is not a tuple. |
213 | 211 | layout | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 95 | method | |
214 | 212 | from_pyval | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 99 | method | |
215 | 213 | CreateShapeFromNumpy | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 129 | function | Create a Shape from a Numpy array or a nested tuple structure thereof.
Args:
value: Numpy array or (possibly nested) tuple structure that bottoms out in
Numpy arrays.
Returns:
A Shape object. |
216 | 214 | CreateShapeFromDtypeAndTuple | tensorflow/tensorflow/compiler/xla/python_api/xla_shape.py | 147 | function | Create a shape from a Numpy dtype and a sequence of nonnegative integers.
Args:
dtype: a numpy dtype, e.g. np.dtype('int32').
shape_tuple: a sequence of nonnegative integers.
Returns:
A Shape object. |
217 | 215 | load_graph | tensorflow/tensorflow/examples/label_image/label_image.py | 26 | function | |
218 | 216 | read_tensor_from_image_file | tensorflow/tensorflow/examples/label_image/label_image.py | 38 | function | |
219 | 217 | load_labels | tensorflow/tensorflow/examples/label_image/label_image.py | 65 | function | |
220 | 218 | MaybeDistributionScope | tensorflow/tensorflow/examples/saved_model/integration_tests/distribution_strategy_utils.py | 48 | class | Provides a context allowing no distribution strategy. |
221 | 219 | from_name | tensorflow/tensorflow/examples/saved_model/integration_tests/distribution_strategy_utils.py | 52 | method | |
222 | 220 | make_feature_extractor | tensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py | 56 | function | Returns a Keras Model to compute a feature vector from MNIST images. |
223 | 221 | set_feature_extractor_hparams | tensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py | 72 | function | |
224 | 222 | make_classifier | tensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py | 76 | function | Returns a Keras Model to classify MNIST using feature_extractor. |
225 | 223 | wrap_keras_model_for_export | tensorflow/tensorflow/examples/saved_model/integration_tests/export_mnist_cnn.py | 87 | function | Wraps `model` for saving and loading as SavedModel. |
226 | 224 | write_vocabulary_file | tensorflow/tensorflow/examples/saved_model/integration_tests/export_simple_text_embedding.py | 34 | function | Write temporary vocab file for module construction. |
227 | 225 | TextEmbeddingModel | tensorflow/tensorflow/examples/saved_model/integration_tests/export_simple_text_embedding.py | 44 | class | Text embedding model.
A text embedding model that takes sentences as input and outputs
sentence embeddings. |
228 | 226 | TextRnnModel | tensorflow/tensorflow/examples/saved_model/integration_tests/export_text_rnn_model.py | 31 | class | Text RNN model.
A full generative text RNN model that can train and decode sentences from a
starting word. |
229 | 227 | train | tensorflow/tensorflow/examples/saved_model/integration_tests/export_text_rnn_model.py | 81 | method | |
230 | 228 | decode_greedy | tensorflow/tensorflow/examples/saved_model/integration_tests/export_text_rnn_model.py | 143 | method | |
231 | 229 | MaybeRunScriptInstead | tensorflow/tensorflow/examples/saved_model/integration_tests/integration_scripts.py | 62 | function | |
232 | 230 | load_reshaped_data | tensorflow/tensorflow/examples/saved_model/integration_tests/mnist_util.py | 34 | function | Returns MNIST or Fashion MNIST or fake train and test data. |
233 | 231 | make_feature_extractor | tensorflow/tensorflow/examples/saved_model/integration_tests/use_mnist_cnn.py | 72 | function | Load a pre-trained feature extractor and wrap it for use in Keras. |
234 | 232 | make_classifier | tensorflow/tensorflow/examples/saved_model/integration_tests/use_mnist_cnn.py | 100 | function | Returns a Keras Model to classify MNIST using feature_extractor. |
235 | 233 | train | tensorflow/tensorflow/examples/saved_model/integration_tests/use_model_in_sequential_keras.py | 35 | function | Build a Keras model and train with mock data. |
236 | 234 | train | tensorflow/tensorflow/examples/saved_model/integration_tests/use_text_embedding_in_dataset.py | 34 | function | Build a Keras model and train with mock data. |
237 | 235 | StreamingAccuracyStats | tensorflow/tensorflow/examples/speech_commands/accuracy_utils.py | 24 | class | Get streaming accuracy statistics every time a new command is found.
Attributes:
_how_many_gt: How many ground truths.
_how_many_gt_matched: How many ground truths have been matched.
_how_many_fp: How many commands have been fired as false positive.
_how_many_c: How many commands have been fired correctly.
_how_many_w: How many commands have been fired wrongly.
_gt_occurrence: A list recording which commands occur, and when, in the
input audio stream.
_previous_c: A variable to record the last status of _how_many_c.
_previous_w: A variable to record the last status of _how_many_w.
_previous_fp: A variable to record the last status of _how_many_fp. |
238 | 236 | read_ground_truth_file | tensorflow/tensorflow/examples/speech_commands/accuracy_utils.py | 52 | method | Load ground truth and timestamp pairs and store it in time order. |
239 | 237 | delta | tensorflow/tensorflow/examples/speech_commands/accuracy_utils.py | 64 | method | Compute delta of StreamingAccuracyStats against last status. |
240 | 238 | calculate_accuracy_stats | tensorflow/tensorflow/examples/speech_commands/accuracy_utils.py | 83 | method | Calculate accuracy statistics when a new command is found.
Given the ground truth and the corresponding predictions found by the
model, figure out how many were correct. Take a tolerance time, so that only
predictions up to a point in time are considered.
Args:
found_words: A list of all commands found up to now.
up_to_time_ms: End timestamp of this audio piece.
time_tolerance_ms: The tolerance milliseconds before and after
up_to_time_ms to match a ground truth. |
241 | 239 | print_accuracy_stats | tensorflow/tensorflow/examples/speech_commands/accuracy_utils.py | 137 | method | Write a human-readable description of the statistics to stdout. |
242 | 240 | create_inference_graph | tensorflow/tensorflow/examples/speech_commands/freeze.py | 63 | function | Creates an audio model with the nodes needed for inference.
Uses the supplied arguments to create a model, and inserts the input and
output nodes that are needed to use the graph for inference.
Args:
wanted_words: Comma-separated list of the words we're trying to recognize.
sample_rate: How many samples per second are in the input audio files.
clip_duration_ms: Length of each audio clip to be analyzed, in milliseconds.
clip_stride_ms: How often to run recognition. Useful for models with cache.
window_size_ms: Time slice duration to estimate frequencies from.
window_stride_ms: How far apart time slices should be.
feature_bin_count: Number of frequency bands to analyze.
model_architecture: Name of the kind of model to generate.
preprocess: How the spectrogram is processed to produce features, for
example 'mfcc', 'average', or 'micro'.
Returns:
Input and output tensor objects.
Raises:
Exception: If the preprocessing mode isn't recognized. |
243 | 241 | save_graph_def | tensorflow/tensorflow/examples/speech_commands/freeze.py | 161 | function | Writes a graph def file out to disk.
Args:
file_name: Where to save the file.
frozen_graph_def: GraphDef proto object to save. |
244 | 242 | save_saved_model | tensorflow/tensorflow/examples/speech_commands/freeze.py | 176 | function | Writes a SavedModel out to disk.
Args:
file_name: Where to save the file.
sess: TensorFlow session containing the graph.
input_tensor: Tensor object defining the input's properties.
output_tensor: Tensor object defining the output's properties. |
245 | 243 | mix_in_audio_sample | tensorflow/tensorflow/examples/speech_commands/generate_streaming_test_wav.py | 55 | function | Mixes the sample data into the main track at the specified offset.
Args:
track_data: Numpy array holding main audio data. Modified in-place.
track_offset: Where to mix the sample into the main track.
sample_data: Numpy array of audio data to mix into the main track.
sample_offset: Where to start in the audio sample.
clip_duration: How long the sample segment is.
sample_volume: Loudness to mix the sample in at.
ramp_in: Length in samples of volume increase stage.
ramp_out: Length in samples of volume decrease stage. |
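The mixing contract above (in-place addition at an offset, scaled by a volume, with ramp-in and ramp-out stages) can be sketched in plain Python; the real implementation operates on numpy arrays, and linear ramps are an assumption here:

```python
def mix_in_audio_sample_sketch(track_data, track_offset, sample_data,
                               sample_offset, clip_duration, sample_volume,
                               ramp_in, ramp_out):
    """Sketch: add a volume-scaled sample into track_data in place,
    with linear fade-in/fade-out envelopes at the clip edges."""
    for i in range(clip_duration):
        if track_offset + i >= len(track_data):
            break  # clip runs past the end of the main track
        if ramp_in and i < ramp_in:
            envelope = i / ramp_in                    # fading in
        elif ramp_out and i > clip_duration - ramp_out:
            envelope = (clip_duration - i) / ramp_out  # fading out
        else:
            envelope = 1.0                            # full volume
        track_data[track_offset + i] += (
            sample_data[sample_offset + i] * sample_volume * envelope)
```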
246 | 244 | prepare_words_list | tensorflow/tensorflow/examples/speech_commands/input_data.py | 58 | function | Prepends common tokens to the custom word list.
Args:
wanted_words: List of strings containing the custom words.
Returns:
List with the standard silence and unknown tokens added. |
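A one-line sketch of `prepare_words_list`; the token spellings `_silence_` and `_unknown_` are assumed here (the real module defines its own label constants):

```python
SILENCE_LABEL = '_silence_'       # assumed token spelling
UNKNOWN_WORD_LABEL = '_unknown_'  # assumed token spelling

def prepare_words_list_sketch(wanted_words):
    """Sketch: prepend the standard silence and unknown tokens."""
    return [SILENCE_LABEL, UNKNOWN_WORD_LABEL] + wanted_words
```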
247 | 245 | which_set | tensorflow/tensorflow/examples/speech_commands/input_data.py | 70 | function | Determines which data partition the file should belong to.
We want to keep files in the same training, validation, or testing sets even
if new ones are added over time. This makes it less likely that testing
samples will accidentally be reused in training when long runs are restarted
for example. To keep this stability, a hash of the filename is taken and used
to determine which set it should belong to. This determination only depends on
the name and the set proportions, so it won't change as other files are added.
It's also useful to associate particular files as related (for example words
spoken by the same person), so anything after '_nohash_' in a filename is
ignored for set determination. This ensures that 'bobby_nohash_0.wav' and
'bobby_nohash_1.wav' are always in the same set, for example.
Args:
filename: File path of the data sample.
validation_percentage: How much of the data set to use for validation.
testing_percentage: How much of the data set to use for testing.
Returns:
String, one of 'training', 'validation', or 'testing'. |
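The stable-hash partition scheme described above is simple enough to sketch end to end: strip everything after `'_nohash_'`, hash the remainder, and map the hash to a percentage that is compared against the set proportions. This is a simplified stand-in for the real `which_set`, not its exact code:

```python
import hashlib
import os
import re

# Cap chosen so the hash-to-percentage mapping stays stable and fine-grained.
MAX_NUM_WAVS_PER_CLASS = 2 ** 27 - 1

def which_set_sketch(filename, validation_percentage, testing_percentage):
    """Sketch of the stable hashing partition described above."""
    base_name = os.path.basename(filename)
    # Ignore the per-recording suffix so related files (e.g. the same
    # speaker) always land in the same partition.
    hash_name = re.sub(r'_nohash_.*$', '', base_name)
    hash_hex = hashlib.sha1(hash_name.encode('utf-8')).hexdigest()
    percentage_hash = ((int(hash_hex, 16) % (MAX_NUM_WAVS_PER_CLASS + 1)) *
                       (100.0 / MAX_NUM_WAVS_PER_CLASS))
    if percentage_hash < validation_percentage:
        return 'validation'
    elif percentage_hash < (testing_percentage + validation_percentage):
        return 'testing'
    return 'training'
```

Because only the stripped name is hashed, `bobby_nohash_0.wav` and `bobby_nohash_1.wav` are guaranteed the same partition, and adding new files never moves existing ones between sets.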
248 | 246 | load_wav_file | tensorflow/tensorflow/examples/speech_commands/input_data.py | 118 | function | Loads an audio file and returns a float PCM-encoded array of samples.
Args:
filename: Path to the .wav file to load.
Returns:
Numpy array holding the sample data as floats between -1.0 and 1.0. |
249 | 247 | save_wav_file | tensorflow/tensorflow/examples/speech_commands/input_data.py | 136 | function | Saves audio sample data to a .wav audio file.
Args:
filename: Path to save the file to.
wav_data: 2D array of float PCM-encoded audio data.
sample_rate: Samples per second to encode in the file. |
250 | 248 | get_features_range | tensorflow/tensorflow/examples/speech_commands/input_data.py | 160 | function | Returns the expected min/max for generated features.
Args:
model_settings: Information about the current model being trained.
Returns:
Min/max float pair holding the range of features.
Raises:
Exception: If preprocessing mode isn't recognized. |
251 | 249 | AudioProcessor | tensorflow/tensorflow/examples/speech_commands/input_data.py | 190 | class | Handles loading, partitioning, and preparing audio training data. |
252 | 250 | maybe_download_and_extract_dataset | tensorflow/tensorflow/examples/speech_commands/input_data.py | 205 | method | Download and extract data set tar file.
If the data set we're using doesn't already exist, this function
downloads it from the TensorFlow.org website and unpacks it into a
directory.
If the data_url is None, don't download anything and expect the data
directory to already contain the correct files.
Args:
data_url: Web location of the tar file containing the data set.
dest_directory: File path to extract data to. |
253 | 251 | prepare_data_index | tensorflow/tensorflow/examples/speech_commands/input_data.py | 247 | method | Prepares a list of the samples organized by set and label.
The training loop needs a list of all the available data, organized by
which partition it should belong to, and with ground truth labels attached.
This function analyzes the folders below the `data_dir`, figures out the
right labels for each file based on the name of the subdirectory it belongs
to, and uses a stable hash to assign it to a data set partition.
Args:
silence_percentage: How much of the resulting data should be background.
unknown_percentage: How much should be audio outside the wanted classes.
wanted_words: Labels of the classes we want to be able to recognize.
validation_percentage: How much of the data set to use for validation.
testing_percentage: How much of the data set to use for testing.
Returns:
Dictionary containing a list of file information for each set partition,
and a lookup map for each class to determine its numeric index.
Raises:
Exception: If expected files are not found. |
254 | 252 | prepare_background_data | tensorflow/tensorflow/examples/speech_commands/input_data.py | 333 | method | Searches a folder for background noise audio, and loads it into memory.
It's expected that the background audio samples will be in a subdirectory
named '_background_noise_' inside the 'data_dir' folder, as .wavs that match
the sample rate of the training data, but can be much longer in duration.
If the '_background_noise_' folder doesn't exist at all, this isn't an
error, it's just taken to mean that no background noise augmentation should
be used. If the folder does exist, but it's empty, that's treated as an
error.
Returns:
List of raw PCM-encoded audio samples of background noise.
Raises:
Exception: If files aren't found in the folder. |
255 | 253 | prepare_processing_graph | tensorflow/tensorflow/examples/speech_commands/input_data.py | 369 | method | Builds a TensorFlow graph to apply the input distortions.
Creates a graph that loads a WAVE file, decodes it, scales the volume,
shifts it in time, adds in background noise, calculates a spectrogram, and
then builds an MFCC fingerprint from that.
This must be called with an active TensorFlow session running, and it
creates multiple placeholder inputs, and one output:
- wav_filename_placeholder_: Filename of the WAV to load.
- foreground_volume_placeholder_: How loud the main clip should be.
- time_shift_padding_placeholder_: Where to pad the clip.
- time_shift_offset_placeholder_: How much to move the clip in time.
- background_data_placeholder_: PCM sample data for background noise.
- background_volume_placeholder_: Loudness of mixed-in background.
- output_: Output 2D fingerprint of processed audio.
Args:
model_settings: Information about the current model being trained.
summaries_dir: Path to save training summary information to.
Raises:
ValueError: If the preprocessing mode isn't recognized.
Exception: If the preprocessor wasn't compiled in. |
256 | 254 | set_size | tensorflow/tensorflow/examples/speech_commands/input_data.py | 498 | method | Calculates the number of samples in the dataset partition.
Args:
mode: Which partition, must be 'training', 'validation', or 'testing'.
Returns:
Number of samples in the partition. |
257 | 255 | get_data | tensorflow/tensorflow/examples/speech_commands/input_data.py | 509 | method | Gather samples from the data set, applying transformations as needed.
When the mode is 'training', a random selection of samples will be returned,
otherwise the first N clips in the partition will be used. This ensures that
validation always uses the same samples, reducing noise in the metrics.
Args:
how_many: Desired number of samples to return. -1 means the entire
contents of this partition.
offset: Where to start when fetching deterministically.
model_settings: Information about the current model being trained.
background_frequency: How many clips will have background noise, 0.0 to
1.0.
background_volume_range: How loud the background noise will be.
time_shift: How much to randomly shift the clips by in time.
mode: Which partition to use, must be 'training', 'validation', or
'testing'.
sess: TensorFlow session that was active when processor was created.
Returns:
List of sample data for the transformed samples, and list of label indexes
Raises:
ValueError: If background samples are too short. |
258 | 256 | get_features_for_wav | tensorflow/tensorflow/examples/speech_commands/input_data.py | 612 | method | Applies the feature transformation process to the input_wav.
Runs the feature generation process (generally producing a spectrogram from
the input samples) on the WAV file. This can be useful for testing and
verifying implementations being run on other platforms.
Args:
wav_filename: The path to the input audio file.
model_settings: Information about the current model being trained.
sess: TensorFlow session that was active when processor was created.
Returns:
Numpy data array containing the generated features. |
259 | 257 | get_unprocessed_data | tensorflow/tensorflow/examples/speech_commands/input_data.py | 640 | method | Retrieve sample data for the given partition, with no transformations.
Args:
how_many: Desired number of samples to return. -1 means the entire
contents of this partition.
model_settings: Information about the current model being trained.
mode: Which partition to use, must be 'training', 'validation', or
'testing'.
Returns:
List of sample data for the samples, and list of labels in one-hot form. |
260 | 258 | load_graph | tensorflow/tensorflow/examples/speech_commands/label_wav.py | 43 | function | Unpersists graph from file as default graph. |
261 | 259 | load_labels | tensorflow/tensorflow/examples/speech_commands/label_wav.py | 51 | function | Read in labels, one label per line. |
262 | 260 | run_graph | tensorflow/tensorflow/examples/speech_commands/label_wav.py | 56 | function | Runs the audio data through the graph and prints predictions. |
263 | 261 | label_wav | tensorflow/tensorflow/examples/speech_commands/label_wav.py | 77 | function | Loads the model and labels, and runs the inference to print predictions. |
264 | 262 | load_graph | tensorflow/tensorflow/examples/speech_commands/label_wav_dir.py | 44 | function | Unpersists graph from file as default graph. |
265 | 263 | load_labels | tensorflow/tensorflow/examples/speech_commands/label_wav_dir.py | 52 | function | Read in labels, one label per line. |
266 | 264 | run_graph | tensorflow/tensorflow/examples/speech_commands/label_wav_dir.py | 57 | function | Runs the audio data through the graph and prints predictions. |
267 | 265 | label_wav | tensorflow/tensorflow/examples/speech_commands/label_wav_dir.py | 85 | function | Loads the model and labels, and runs the inference to print predictions. |
268 | 266 | prepare_model_settings | tensorflow/tensorflow/examples/speech_commands/models.py | 39 | function | Calculates common settings needed for all models.
Args:
label_count: How many classes are to be recognized.
sample_rate: Number of audio samples per second.
clip_duration_ms: Length of each audio clip to be analyzed.
window_size_ms: Duration of frequency analysis window.
window_stride_ms: How far to move in time between frequency windows.
feature_bin_count: Number of frequency bins to use for analysis.
preprocess: How the spectrogram is processed to produce features.
Returns:
Dictionary containing common settings.
Raises:
ValueError: If the preprocessing mode isn't recognized. |
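The arithmetic behind these common settings follows directly from the parameter descriptions: convert the millisecond durations to sample counts, then count how many analysis windows fit in a clip. A sketch of that calculation (field names assumed; the real function also branches on the `preprocess` mode):

```python
def prepare_model_settings_sketch(label_count, sample_rate, clip_duration_ms,
                                  window_size_ms, window_stride_ms,
                                  feature_bin_count):
    """Sketch of the settings arithmetic described above."""
    desired_samples = int(sample_rate * clip_duration_ms / 1000)
    window_size_samples = int(sample_rate * window_size_ms / 1000)
    window_stride_samples = int(sample_rate * window_stride_ms / 1000)
    # Number of analysis windows that fit in one clip.
    length_minus_window = desired_samples - window_size_samples
    if length_minus_window < 0:
        spectrogram_length = 0
    else:
        spectrogram_length = 1 + int(length_minus_window /
                                     window_stride_samples)
    return {
        'desired_samples': desired_samples,
        'window_size_samples': window_size_samples,
        'window_stride_samples': window_stride_samples,
        'spectrogram_length': spectrogram_length,
        # One feature vector per window, feature_bin_count values each.
        'fingerprint_size': feature_bin_count * spectrogram_length,
        'label_count': label_count,
        'sample_rate': sample_rate,
    }
```

For a 1-second clip at 16 kHz with a 30 ms window, 10 ms stride, and 40 feature bins, this yields 98 windows and a fingerprint of 3920 values.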
269 | 267 | create_model | tensorflow/tensorflow/examples/speech_commands/models.py | 95 | function | Builds a model of the requested architecture compatible with the settings.
There are many possible ways of deriving predictions from a spectrogram
input, so this function provides an abstract interface for creating different
kinds of models in a black-box way. You need to pass in a TensorFlow node as
the 'fingerprint' input, and this should output a batch of 1D features that
describe the audio. Typically this will be derived from a spectrogram that's
been run through an MFCC, but in theory it can be any feature vector of the
size specified in model_settings['fingerprint_size'].
The function will build the graph it needs in the current TensorFlow graph,
and return the tensorflow output that will contain the 'logits' input to the
softmax prediction process. If training flag is on, it will also return a
placeholder node that can be used to control the dropout amount.
See the implementations below for the possible model architectures that can be
requested.
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
model_settings: Dictionary of information about the model.
model_architecture: String specifying which kind of model to create.
is_training: Whether the model is going to be used for training.
runtime_settings: Dictionary of information about the runtime.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder.
Raises:
Exception: If the architecture type isn't recognized. |
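The black-box dispatch this docstring describes can be sketched as a lookup table (a hypothetical `create_model_sketch`; the lambda stubs stand in for the real graph-building functions listed below):

```python
def create_model_sketch(fingerprint_input, model_settings, model_architecture,
                        is_training):
    """Dispatches to a builder based on the requested architecture name."""
    # Stub builders standing in for the real graph-construction functions.
    builders = {
        'single_fc': lambda *args: 'single_fc_graph',
        'conv': lambda *args: 'conv_graph',
        'low_latency_conv': lambda *args: 'low_latency_conv_graph',
        'low_latency_svdf': lambda *args: 'low_latency_svdf_graph',
        'tiny_conv': lambda *args: 'tiny_conv_graph',
    }
    if model_architecture not in builders:
        raise Exception(
            'model_architecture argument "%s" not recognized, should be '
            'one of: %s' % (model_architecture, ', '.join(sorted(builders))))
    return builders[model_architecture](fingerprint_input, model_settings,
                                        is_training)
```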
270 | 268 | load_variables_from_checkpoint | tensorflow/tensorflow/examples/speech_commands/models.py | 153 | function | Utility function to centralize checkpoint restoration.
Args:
sess: TensorFlow session.
start_checkpoint: Path to saved checkpoint on disk. |
271 | 269 | create_single_fc_model | tensorflow/tensorflow/examples/speech_commands/models.py | 164 | function | Builds a model with a single hidden fully-connected layer.
This is a very simple model with just one matmul and bias layer. As you'd
expect, it doesn't produce very accurate results, but it is very fast and
simple, so it's useful for sanity testing.
Here's the layout of the graph:
(fingerprint_input)
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
model_settings: Dictionary of information about the model.
is_training: Whether the model is going to be used for training.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder. |
272 | 270 | create_conv_model | tensorflow/tensorflow/examples/speech_commands/models.py | 207 | function | Builds a standard convolutional model.
This is roughly the network labeled as 'cnn-trad-fpool3' in the
'Convolutional Neural Networks for Small-footprint Keyword Spotting' paper:
http://www.isca-speech.org/archive/interspeech_2015/papers/i15_1478.pdf
Here's the layout of the graph:
(fingerprint_input)
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[MaxPool]
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[MaxPool]
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
This produces fairly good quality results, but can involve a large number of
weight parameters and computations. For a cheaper alternative from the same
paper with slightly less accuracy, see 'low_latency_conv' below.
During training, dropout nodes are introduced after each relu, controlled by a
placeholder.
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
model_settings: Dictionary of information about the model.
is_training: Whether the model is going to be used for training.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder. |
273 | 271 | create_low_latency_conv_model | tensorflow/tensorflow/examples/speech_commands/models.py | 333 | function | Builds a convolutional model with low compute requirements.
This is roughly the network labeled as 'cnn-one-fstride4' in the
'Convolutional Neural Networks for Small-footprint Keyword Spotting' paper:
http://www.isca-speech.org/archive/interspeech_2015/papers/i15_1478.pdf
Here's the layout of the graph:
(fingerprint_input)
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
This produces slightly lower quality results than the 'conv' model, but needs
fewer weight parameters and computations.
During training, dropout nodes are introduced after the relu, controlled by a
placeholder.
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
model_settings: Dictionary of information about the model.
is_training: Whether the model is going to be used for training.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder. |
274 | 272 | create_low_latency_svdf_model | tensorflow/tensorflow/examples/speech_commands/models.py | 462 | function | Builds an SVDF model with low compute requirements.
This is based in the topology presented in the 'Compressing Deep Neural
Networks using a Rank-Constrained Topology' paper:
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43813.pdf
Here's the layout of the graph:
(fingerprint_input)
v
[SVDF]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
This model produces lower recognition accuracy than the 'conv' model above,
but requires fewer weight parameters and significantly fewer computations.
During training, dropout nodes are introduced after the relu, controlled by a
placeholder.
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
The node is expected to produce a 2D Tensor of shape:
[batch, model_settings['fingerprint_width'] *
model_settings['spectrogram_length']]
with the features corresponding to the same time slot arranged contiguously,
the oldest slot at index [:, 0], and the newest at [:, -1].
model_settings: Dictionary of information about the model.
is_training: Whether the model is going to be used for training.
runtime_settings: Dictionary of information about the runtime.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder.
Raises:
ValueError: If the inputs tensor is incorrectly shaped. |
275 | 273 | create_tiny_conv_model | tensorflow/tensorflow/examples/speech_commands/models.py | 673 | function | Builds a convolutional model aimed at microcontrollers.
Devices like DSPs and microcontrollers can have very small amounts of
memory and limited processing power. This model is designed to use less
than 20KB of working RAM, and fit within 32KB of read-only (flash) memory.
Here's the layout of the graph:
(fingerprint_input)
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
This doesn't produce particularly accurate results, but it's designed to be
used as the first stage of a pipeline, running on a low-energy piece of
hardware that can always be on, and then wake higher-power chips when a
possible utterance has been found, so that more accurate analysis can be done.
During training, a dropout node is introduced after the relu, controlled by a
placeholder.
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
model_settings: Dictionary of information about the model.
is_training: Whether the model is going to be used for training.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder. |
276 | 274 | create_tiny_embedding_conv_model | tensorflow/tensorflow/examples/speech_commands/models.py | 765 | function | Builds a convolutional model aimed at microcontrollers.
Devices like DSPs and microcontrollers can have very small amounts of
memory and limited processing power. This model is designed to use less
than 20KB of working RAM, and fit within 32KB of read-only (flash) memory.
Here's the layout of the graph:
(fingerprint_input)
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[Conv2D]<-(weights)
v
[BiasAdd]<-(bias)
v
[Relu]
v
[MatMul]<-(weights)
v
[BiasAdd]<-(bias)
v
This doesn't produce particularly accurate results, but it's designed to be
used as the first stage of a pipeline, running on a low-energy piece of
hardware that can always be on, and then wake higher-power chips when a
possible utterance has been found, so that more accurate analysis can be done.
During training, a dropout node is introduced after the relu, controlled by a
placeholder.
Args:
fingerprint_input: TensorFlow node that will output audio feature vectors.
model_settings: Dictionary of information about the model.
is_training: Whether the model is going to be used for training.
Returns:
TensorFlow node outputting logits results, and optionally a dropout
placeholder. |
277 | 275 | RecognizeResult | tensorflow/tensorflow/examples/speech_commands/recognize_commands.py | 25 | class | Saves a recognition result temporarily.
Attributes:
founded_command: A string indicating the word just found. Default value
is '_silence_'.
score: A float representing the confidence of the found word. Default
value is zero.
is_new_command: A boolean indicating whether the found command is new
compared with the last one. Default value is False. |
278 | 276 | founded_command | tensorflow/tensorflow/examples/speech_commands/recognize_commands.py | 43 | method | |
279 | 277 | founded_command | tensorflow/tensorflow/examples/speech_commands/recognize_commands.py | 47 | method | |
280 | 278 | score | tensorflow/tensorflow/examples/speech_commands/recognize_commands.py | 51 | method | |
281 | 279 | score | tensorflow/tensorflow/examples/speech_commands/recognize_commands.py | 55 | method | |
282 | 280 | is_new_command | tensorflow/tensorflow/examples/speech_commands/recognize_commands.py | 59 | method | |
283 | 281 | is_new_command | tensorflow/tensorflow/examples/speech_commands/recognize_commands.py | 63 | method | |
284 | 282 | RecognizeCommands | tensorflow/tensorflow/examples/speech_commands/recognize_commands.py | 67 | class | Smooths the inference results by using an averaging window.
Maintains a sliding window over the audio stream: each new result (a pair of
the confidences of all classes and the start timestamp of the input audio
clip) is added as soon as inference produces it, while the oldest results and
other abnormal values are removed. The results in the window are then
averaged to find the most reliable command for that period.
Attributes:
_label: A list containing the commands, at their corresponding line indices.
_average_window_duration: The length of the averaging window.
_detection_threshold: A confidence threshold for filtering out unreliable
commands.
_suppression_ms: The minimum number of milliseconds that must separate two
reliably detected commands.
_minimum_count: An integer count indicating the minimum number of results
the averaging window should cover.
_previous_results: A deque storing previous results.
_label_count: The length of the label list.
_previous_top_label: The last detected command. Initial value is '_silence_'.
_previous_top_time: The timestamp of the previous result. Default is -np.inf. |
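The sliding-window averaging behaviour can be sketched in plain Python (a hypothetical `smooth_predictions` helper; the real class additionally applies the suppression and minimum-count checks listed above):

```python
from collections import deque


def smooth_predictions(window, average_window_duration_ms, current_time_ms,
                       new_scores, labels, detection_threshold):
    """Averages per-class scores over a sliding time window.

    `window` is a deque of (timestamp_ms, scores) pairs mutated in place.
    Returns the top label and its averaged score, or '_silence_' when the
    averaged confidence does not clear the detection threshold.
    """
    window.append((current_time_ms, new_scores))
    # Drop results that have fallen out of the averaging window.
    earliest_allowed = current_time_ms - average_window_duration_ms
    while window and window[0][0] < earliest_allowed:
        window.popleft()
    # Average each class's score across the retained results.
    count = len(window)
    averaged = [sum(scores[i] for _, scores in window) / count
                for i in range(len(labels))]
    top_index = max(range(len(labels)), key=lambda i: averaged[i])
    if averaged[top_index] > detection_threshold:
        return labels[top_index], averaged[top_index]
    return '_silence_', averaged[top_index]
```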
285 | 283 | load_graph | tensorflow/tensorflow/examples/speech_commands/test_streaming_accuracy.py | 80 | function | Reads a TensorFlow model and creates a default graph object. |
286 | 284 | read_label_file | tensorflow/tensorflow/examples/speech_commands/test_streaming_accuracy.py | 92 | function | Loads a list of labels. |
287 | 285 | read_wav_file | tensorflow/tensorflow/examples/speech_commands/test_streaming_accuracy.py | 101 | function | Loads a WAV file and returns the sample rate and float64 numpy data. |
288 | 286 | verbosity_arg | tensorflow/tensorflow/examples/speech_commands/train.py | 480 | function | Parses verbosity argument.
Args:
value: A member of tf.logging.
Raises:
ArgumentTypeError: Not an expected value. |
289 | 287 | requires_contrib | tensorflow/tensorflow/examples/speech_commands/train_test.py | 32 | function | |
290 | 288 | DictStruct | tensorflow/tensorflow/examples/speech_commands/train_test.py | 44 | class | |
291 | 289 | wav_to_features | tensorflow/tensorflow/examples/speech_commands/wav_to_features.py | 47 | function | Converts an audio file into its corresponding feature map.
Args:
sample_rate: Expected sample rate of the wavs.
clip_duration_ms: Expected duration in milliseconds of the wavs.
window_size_ms: How long each spectrogram timeslice is.
window_stride_ms: How far to move in time between spectrogram timeslices.
feature_bin_count: How many bins to use for the feature fingerprint.
quantize: Whether to train the model for eight-bit deployment.
preprocess: Spectrogram processing mode; "mfcc", "average" or "micro".
input_wav: Path to the audio WAV file to read.
output_c_file: Where to save the generated C source file. |
292 | 290 | create_model | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 69 | function | Model to recognize digits in the MNIST dataset.
Network structure is equivalent to:
https://github.com/tensorflow/tensorflow/blob/r1.5/tensorflow/examples/tutorials/mnist/mnist_deep.py
and
https://github.com/tensorflow/models/blob/master/tutorials/image/mnist/convolutional.py
But uses the tf.keras API.
Returns:
A tf.keras.Model. |
293 | 291 | mnist_datasets | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 115 | function | |
294 | 292 | loss | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 125 | function | |
295 | 293 | compute_accuracy | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 131 | function | |
296 | 294 | train | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 140 | function | Trains model on `dataset` using `optimizer`. |
297 | 295 | train_and_export | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 184 | function | Run MNIST training and eval loop in eager mode.
Args:
flags_obj: An object containing parsed flag values. |
298 | 296 | import_and_eval | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 237 | function | |
299 | 297 | apply_clean | tensorflow/tensorflow/examples/tf2_showcase/mnist.py | 247 | function | |
300 | 298 | placeholder_inputs | tensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py | 37 | function | Generate placeholder variables to represent the input tensors.
These placeholders are used as inputs by the rest of the model building
code and will be fed from the downloaded data in the .run() loop, below.
Args:
batch_size: The batch size will be baked into both placeholders.
Returns:
images_placeholder: Images placeholder.
labels_placeholder: Labels placeholder. |
301 | 299 | fill_feed_dict | tensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py | 59 | function | Fills the feed_dict for training the given step.
A feed_dict takes the form of:
feed_dict = {
<placeholder>: <tensor of values to be passed for placeholder>,
....
}
Args:
data_set: The set of images and labels, from input_data.read_data_sets()
images_pl: The images placeholder, from placeholder_inputs().
labels_pl: The labels placeholder, from placeholder_inputs().
Returns:
feed_dict: The feed dictionary mapping from placeholders to values. |
302 | 300 | do_eval | tensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py | 87 | function | Runs one evaluation against the full epoch of data.
Args:
sess: The session in which the model has been trained.
eval_correct: The Tensor that returns the number of correct predictions.
images_placeholder: The images placeholder.
labels_placeholder: The labels placeholder.
data_set: The set of images and labels to evaluate, from
input_data.read_data_sets(). |
303 | 301 | run_training | tensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py | 116 | function | Train MNIST for a number of steps. |
304 | 302 | read_data_sets | tensorflow/tensorflow/examples/tutorials/mnist/input_data.py | 266 | function | |
305 | 303 | inference | tensorflow/tensorflow/examples/tutorials/mnist/mnist.py | 45 | function | Build the MNIST model up to where it may be used for inference.
Args:
images: Images placeholder, from inputs().
hidden1_units: Size of the first hidden layer.
hidden2_units: Size of the second hidden layer.
Returns:
softmax_linear: Output tensor with the computed logits. |
306 | 304 | loss | tensorflow/tensorflow/examples/tutorials/mnist/mnist.py | 86 | function | Calculates the loss from the logits and the labels.
Args:
logits: Logits tensor, float - [batch_size, NUM_CLASSES].
labels: Labels tensor, int32 - [batch_size].
Returns:
loss: Loss tensor of type float. |
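The loss this row describes can be sketched as a plain-Python sparse softmax cross-entropy, using the numerically stable log-sum-exp form (`sparse_softmax_cross_entropy` here is a hypothetical stand-in for the TF op the tutorial calls):

```python
import math


def sparse_softmax_cross_entropy(logits, labels):
    """Mean cross-entropy between integer labels and unnormalized logits.

    For each row, loss = log(sum_j exp(logit_j)) - logit_label, computed
    with the max subtracted first for numerical stability.
    """
    total = 0.0
    for row, label in zip(logits, labels):
        m = max(row)
        log_sum_exp = m + math.log(sum(math.exp(x - m) for x in row))
        total += log_sum_exp - row[label]
    return total / len(labels)
```

With uniform logits over two classes the loss is log(2), the entropy of a coin flip, which is a handy sanity check for an untrained classifier.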
307 | 305 | training | tensorflow/tensorflow/examples/tutorials/mnist/mnist.py | 101 | function | Sets up the training Ops.
Creates a summarizer to track the loss over time in TensorBoard.
Creates an optimizer and applies the gradients to all trainable variables.
The Op returned by this function is what must be passed to the
`sess.run()` call to cause the model to train.
Args:
loss: Loss tensor, from loss().
learning_rate: The learning rate to use for gradient descent.
Returns:
train_op: The Op for training. |
308 | 306 | evaluation | tensorflow/tensorflow/examples/tutorials/mnist/mnist.py | 130 | function | Evaluate the quality of the logits at predicting the label.
Args:
logits: Logits tensor, float - [batch_size, NUM_CLASSES].
labels: Labels tensor, int32 - [batch_size], with values in the
range [0, NUM_CLASSES).
Returns:
A scalar int32 tensor with the number of examples (out of batch_size)
that were predicted correctly. |
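The top-1 counting that `evaluation` performs can be sketched without TensorFlow (a hypothetical `count_correct`, mirroring what `tf.nn.in_top_k` with k=1 plus a sum would compute):

```python
def count_correct(logits, labels):
    """Counts examples whose argmax logit matches the integer label."""
    correct = 0
    for row, label in zip(logits, labels):
        # Index of the largest logit is the predicted class.
        predicted = max(range(len(row)), key=row.__getitem__)
        if predicted == label:
            correct += 1
    return correct
```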
309 | 307 | train | tensorflow/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py | 38 | function | |
310 | 308 | word2vec_basic | tensorflow/tensorflow/examples/tutorials/word2vec/word2vec_basic.py | 49 | function | Example of building, training and visualizing a word2vec model. |
311 | 309 | suppress_exception | tensorflow/tensorflow/lite/examples/experimental_new_converter/stack_trace_example.py | 37 | function | |
312 | 310 | load_labels | tensorflow/tensorflow/lite/examples/python/label_image.py | 29 | function | |
313 | 311 | dynamic_rnn | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn.py | 42 | function | Creates a recurrent neural network specified by RNNCell `cell`.
Performs fully dynamic unrolling of `inputs`.
Example:
```python
# create a BasicRNNCell
rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)
# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]
# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)
# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
initial_state=initial_state,
dtype=tf.float32)
```
```python
# create 2 LSTMCells
rnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size) for size in [128, 256]]
# create a RNN cell composed sequentially of a number of RNNCells
multi_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers)
# 'outputs' is a tensor of shape [batch_size, max_time, 256]
# 'state' is a N-tuple where N is the number of LSTMCells containing a
# tf.nn.rnn_cell.LSTMStateTuple for each cell
outputs, state = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=data,
dtype=tf.float32)
```
Args:
cell: An instance of RNNCell.
inputs: The RNN inputs.
If `time_major == False` (default), this must be a `Tensor` of shape:
`[batch_size, max_time, ...]`, or a nested tuple of such elements.
If `time_major == True`, this must be a `Tensor` of shape: `[max_time,
batch_size, ...]`, or a nested tuple of such elements. This may also be
a (possibly nested) tuple of Tensors satisfying this property. The
first two dimensions must match across all the inputs, but otherwise the
ranks and other shape components may differ. In this case, input to
`cell` at each time-step will replicate the structure of these tuples,
except for the time dimension (from which the time is taken). The input
to `cell` at each time step will be a `Tensor` or (possibly nested)
tuple of Tensors each with dimensions `[batch_size, ...]`.
sequence_length: (optional) An int32/int64 vector sized `[batch_size]`. Used
to copy-through state and zero-out outputs when past a batch element's
sequence length. So it's more for performance than correctness.
initial_state: (optional) An initial state for the RNN. If `cell.state_size`
is an integer, this must be a `Tensor` of appropriate type and shape
`[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this
should be a tuple of tensors having shapes `[batch_size, s] for s in
cell.state_size`.
dtype: (optional) The data type for the initial state and expected output.
Required if initial_state is not provided or RNN state has a heterogeneous
dtype.
parallel_iterations: (Default: 32). The number of iterations to run in
parallel. Those operations which do not have any temporal dependency and
can be run in parallel, will be. This parameter trades off time for
space. Values >> 1 use more memory but take less time, while smaller
values use less memory but computations take longer.
swap_memory: Transparently swap the tensors produced in forward inference
but needed for back prop from GPU to CPU. This allows training RNNs which
would typically not fit on a single GPU, with very minimal (or no)
performance penalty.
time_major: The shape format of the `inputs` and `outputs` Tensors. If true,
these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false,
these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using
`time_major = True` is a bit more efficient because it avoids transposes
at the beginning and end of the RNN calculation. However, most TensorFlow
data is batch-major, so by default this function accepts input and emits
output in batch-major form.
scope: VariableScope for the created subgraph; defaults to "rnn".
Returns:
A pair (outputs, state) where:
outputs: The RNN output `Tensor`.
If time_major == False (default), this will be a `Tensor` shaped:
`[batch_size, max_time, cell.output_size]`.
If time_major == True, this will be a `Tensor` shaped:
`[max_time, batch_size, cell.output_size]`.
Note, if `cell.output_size` is a (possibly nested) tuple of integers
or `TensorShape` objects, then `outputs` will be a tuple having the
same structure as `cell.output_size`, containing Tensors having shapes
corresponding to the shape data in `cell.output_size`.
state: The final state. If `cell.state_size` is an int, this
will be shaped `[batch_size, cell.state_size]`. If it is a
`TensorShape`, this will be shaped `[batch_size] + cell.state_size`.
If it is a (possibly nested) tuple of ints or `TensorShape`, this will
be a tuple having the corresponding shapes. If cells are `LSTMCells`
`state` will be a tuple containing a `LSTMStateTuple` for each cell.
Raises:
TypeError: If `cell` is not an instance of RNNCell.
ValueError: If inputs is None or an empty list.
RuntimeError: If not using control flow v2. |
314 | 312 | bidirectional_dynamic_rnn | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn.py | 279 | function | Creates a dynamic version of bidirectional recurrent neural network.
Takes input and builds independent forward and backward RNNs. The input_size
of forward and backward cell must match. The initial state for both directions
is zero by default (but can be set optionally) and no intermediate states are
ever returned -- the network is fully unrolled for the given (passed in)
length(s) of the sequence(s) or completely unrolled if length(s) is not
given.
Args:
cell_fw: An instance of RNNCell, to be used for forward direction.
cell_bw: An instance of RNNCell, to be used for backward direction.
inputs: The RNN inputs.
If time_major == False (default), this must be a tensor of shape:
`[batch_size, max_time, ...]`, or a nested tuple of such elements.
If time_major == True, this must be a tensor of shape: `[max_time,
batch_size, ...]`, or a nested tuple of such elements.
sequence_length: (optional) An int32/int64 vector, size `[batch_size]`,
containing the actual lengths for each of the sequences in the batch. If
not provided, all batch entries are assumed to be full sequences; and time
reversal is applied from time `0` to `max_time` for each sequence.
initial_state_fw: (optional) An initial state for the forward RNN. This must
be a tensor of appropriate type and shape `[batch_size,
cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a
tuple of tensors having shapes `[batch_size, s] for s in
cell_fw.state_size`.
initial_state_bw: (optional) Same as for `initial_state_fw`, but using the
corresponding properties of `cell_bw`.
dtype: (optional) The data type for the initial states and expected output.
Required if initial_states are not provided or RNN states have a
heterogeneous dtype.
parallel_iterations: (Default: 32). The number of iterations to run in
parallel. Those operations which do not have any temporal dependency and
can be run in parallel, will be. This parameter trades off time for
space. Values >> 1 use more memory but take less time, while smaller
values use less memory but computations take longer.
swap_memory: Transparently swap the tensors produced in forward inference
but needed for back prop from GPU to CPU. This allows training RNNs which
would typically not fit on a single GPU, with very minimal (or no)
performance penalty.
time_major: The shape format of the `inputs` and `outputs` Tensors. If true,
these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false,
these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using
`time_major = True` is a bit more efficient because it avoids transposes
at the beginning and end of the RNN calculation. However, most TensorFlow
data is batch-major, so by default this function accepts input and emits
output in batch-major form.
scope: VariableScope for the created subgraph; defaults to
"bidirectional_rnn"
Returns:
A tuple (outputs, output_states) where:
outputs: A tuple (output_fw, output_bw) containing the forward and
the backward rnn output `Tensor`.
If time_major == False (default),
output_fw will be a `Tensor` shaped:
`[batch_size, max_time, cell_fw.output_size]`
and output_bw will be a `Tensor` shaped:
`[batch_size, max_time, cell_bw.output_size]`.
If time_major == True,
output_fw will be a `Tensor` shaped:
`[max_time, batch_size, cell_fw.output_size]`
and output_bw will be a `Tensor` shaped:
`[max_time, batch_size, cell_bw.output_size]`.
It returns a tuple instead of a single concatenated `Tensor`, unlike
in the `bidirectional_rnn`. If the concatenated one is preferred,
the forward and backward outputs can be concatenated as
`tf.concat(outputs, 2)`.
output_states: A tuple (output_state_fw, output_state_bw) containing
the forward and the backward final states of bidirectional rnn.
Raises:
TypeError: If `cell_fw` or `cell_bw` is not an instance of `RNNCell`. |
315 | 313 | TfLiteRNNCell | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 39 | class | The most basic RNN cell.
This is used only for TfLite; it provides hints and also arranges the
variables in the format desired by the tflite ops. |
316 | 314 | state_size | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 88 | method | |
317 | 315 | output_size | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 92 | method | |
318 | 316 | build | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 95 | method | Builds the RNN cell.
Args:
inputs_shape: Rnn input tensor shape.
Raises:
ValueError: If last dimension of the input shape is not known. |
319 | 317 | call | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 127 | method | Most basic RNN: output = new_state = act(W * input + U * state + B). |
320 | 318 | get_config | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 150 | method | |
321 | 319 | add_variable_wrapped | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 110 | method | |
322 | 320 | TFLiteLSTMCell | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 162 | class | Long short-term memory unit (LSTM) recurrent network cell.
This is used only for TfLite; it provides hints and also arranges the
variables in the format desired by the tflite ops (transposed and separated).
The default non-peephole implementation is based on:
https://pdfs.semanticscholar.org/1154/0131eae85b2e11d53df7f1360eeb6476e7f4.pdf
Felix Gers, Jurgen Schmidhuber, and Fred Cummins.
"Learning to forget: Continual prediction with LSTM." IET, 850-855, 1999.
The peephole implementation is based on:
https://research.google.com/pubs/archive/43905.pdf
Hasim Sak, Andrew Senior, and Francoise Beaufays.
"Long short-term memory recurrent neural network architectures for
large scale acoustic modeling." INTERSPEECH, 2014.
The class uses optional peep-hole connections, optional cell clipping, and
an optional projection layer.
Note that this cell is not optimized for performance. Please use
`tf.contrib.cudnn_rnn.CudnnLSTM` for better performance on GPU, or
`tf.contrib.rnn.LSTMBlockCell` and `tf.contrib.rnn.LSTMBlockFusedCell` for
better performance on CPU. |
323 | 321 | state_size | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 284 | method | |
324 | 322 | output_size | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 288 | method | |
325 | 323 | build | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 291 | method | Build TfLite LSTM cell graph.
Args:
inputs_shape: The inputs_shape must be known, and is [batch_size,
input_size] shape.
Raises:
ValueError: if the inputs_shape is invalid. |
326 | 324 | call | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 392 | method | Run one step of LSTM.
Args:
inputs: input Tensor, 2D, `[batch, num_units]`.
state: if `state_is_tuple` is False, this must be a state Tensor, `2-D,
[batch, state_size]`. If `state_is_tuple` is True, this must be a tuple
of state Tensors, both `2-D`, with column sizes `c_state` and `m_state`.
Returns:
A tuple containing:
- A `2-D, [batch, output_dim]`, Tensor representing the output of the
LSTM after reading `inputs` when previous state was `state`.
Here output_dim is:
num_proj if num_proj was set,
num_units otherwise.
- Tensor(s) representing the new state of LSTM after reading `inputs` when
the previous state was `state`. Same type and shape(s) as `state`.
Raises:
ValueError: If input size cannot be inferred from inputs via
static shape inference. |
327 | 325 | get_config | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 519 | method | |
328 | 326 | add_variable_wrapped | tensorflow/tensorflow/lite/experimental/examples/lstm/rnn_cell.py | 317 | method | |
329 | 327 | audio_microfrontend | tensorflow/tensorflow/lite/experimental/microfrontend/python/ops/audio_microfrontend_op.py | 34 | function | Audio Microfrontend Op.
This Op converts a sequence of audio data into one or more
feature vectors containing filterbanks of the input. The
conversion process uses a lightweight library to perform:
1. A slicing window function
2. Short-time FFTs
3. Filterbank calculations
4. Noise reduction
5. PCAN Auto Gain Control
6. Logarithmic scaling
Args:
audio: 1D Tensor, int16 audio data in temporal ordering.
sample_rate: Integer, the sample rate of the audio in Hz.
window_size: Integer, length of desired time frames in ms.
window_step: Integer, length of step size for the next frame in ms.
num_channels: Integer, the number of filterbank channels to use.
upper_band_limit: Float, the highest frequency included in the filterbanks.
lower_band_limit: Float, the lowest frequency included in the filterbanks.
smoothing_bits: Int, scale up signal by 2^(smoothing_bits) before reduction.
even_smoothing: Float, smoothing coefficient for even-numbered channels.
odd_smoothing: Float, smoothing coefficient for odd-numbered channels.
min_signal_remaining: Float, fraction of signal to preserve in smoothing.
enable_pcan: Bool, enable PCAN auto gain control.
pcan_strength: Float, gain normalization exponent.
pcan_offset: Float, positive value added in the normalization denominator.
gain_bits: Int, number of fractional bits in the gain.
enable_log: Bool, enable logarithmic scaling of filterbanks.
scale_shift: Integer, scale filterbanks by 2^(scale_shift).
left_context: Integer, number of preceding frames to attach to each frame.
right_context: Integer, number of following frames to attach to each frame.
frame_stride: Integer, M frames to skip over, where output[n] = frame[n*M].
zero_padding: Bool, if left/right context is out-of-bounds, attach frame of
zeroes. Otherwise, frame[0] or frame[size-1] will be copied.
out_scale: Integer, divide all filterbanks by this number.
out_type: DType, type of the output Tensor, defaults to UINT16.
Returns:
filterbanks: 2D Tensor, each row is a time frame, each column is a channel.
Raises:
ValueError: If the audio tensor is not explicitly a vector. |
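The left/right-context framing described by the `left_context`, `right_context`, `frame_stride`, and `zero_padding` arguments can be sketched in plain Python (a hypothetical `attach_context`; the real op does this on filterbank rows inside the kernel):

```python
def attach_context(frames, left_context, right_context, frame_stride,
                   zero_padding=True):
    """Attaches neighbouring frames to each kept frame.

    `frames` is a list of per-frame channel lists. For every frame_stride-th
    frame, the preceding left_context and following right_context frames are
    concatenated onto it; out-of-bounds neighbours are either zero frames
    (zero_padding=True) or copies of the first/last frame.
    """
    num_channels = len(frames[0])
    zero = [0] * num_channels
    out = []
    for center in range(0, len(frames), frame_stride):
        stacked = []
        for offset in range(-left_context, right_context + 1):
            idx = center + offset
            if 0 <= idx < len(frames):
                stacked.extend(frames[idx])
            elif zero_padding:
                stacked.extend(zero)
            else:
                # Clamp to the first or last available frame.
                stacked.extend(frames[min(max(idx, 0), len(frames) - 1)])
        out.append(stacked)
    return out
```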
330 | 328 | SupportedOp | tensorflow/tensorflow/lite/experimental/tensorboard/ops_util.py | 26 | class | Spec of supported ops.
Args:
op: string of op name. |
331 | 329 | get_potentially_supported_ops | tensorflow/tensorflow/lite/experimental/tensorboard/ops_util.py | 35 | function | Returns operations potentially supported by TensorFlow Lite.
The potentially supported list contains ops that are partially or
fully supported; it is derived by scanning op names only, without performing a
real conversion or examining op-specific parameters.
Given that some ops may be partially supported, the optimal way to determine
if a model's operations are supported is by converting using the TensorFlow
Lite converter.
Returns:
A list of SupportedOp. |
332 | 330 | time_wrapping | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_augmentation.py | 29 | function | Generate (numerator/denominator)x speed data. |
333 | 331 | augment_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_augmentation.py | 43 | function | Perform data augmentation. |
334 | 332 | DataLoader | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_load.py | 35 | class | Loads data and prepares for training. |
335 | 333 | get_data_file | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_load.py | 50 | method | Get train, valid and test data from files. |
336 | 334 | pad | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_load.py | 66 | method | Get neighbour padding. |
337 | 335 | format_support_func | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_load.py | 81 | method | Support function for format. (Helps format train, valid and test.) |
338 | 336 | format | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_load.py | 98 | method | Format data (including padding, etc.) and get the dataset for the model. |
339 | 337 | prepare_original_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_prepare.py | 46 | function | Read collected data from files. |
340 | 338 | generate_negative_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_prepare.py | 86 | function | Generate negative data labeled as 'negative6~8'. |
341 | 339 | write_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_prepare.py | 143 | function | |
342 | 340 | read_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split.py | 40 | function | |
343 | 341 | split_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split.py | 51 | function | Splits data into train, validation and test according to ratio. |
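The ratio split used by `split_data` can be sketched in plain Python. This is a hedged sketch, not the actual implementation; the function name, ratios, and seed value here are assumptions:

```python
import random

def split_data(samples, train_ratio=0.6, valid_ratio=0.2, seed=30):
    """Shuffle, then split into train/validation/test according to ratio."""
    data = list(samples)
    random.Random(seed).shuffle(data)  # deterministic shuffle for reproducibility
    n_train = int(len(data) * train_ratio)
    n_valid = int(len(data) * valid_ratio)
    train = data[:n_train]
    valid = data[n_train:n_train + n_valid]
    test = data[n_train + n_valid:]  # remainder goes to test
    return train, valid, test
```

For 100 samples with the default ratios this yields a 60/20/20 split, and every sample lands in exactly one partition.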
344 | 342 | person_split | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/data_split_person.py | 41 | function | Split data by person. |
345 | 343 | reshape_function | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 37 | function | |
346 | 344 | calculate_model_size | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 42 | function | |
347 | 345 | build_cnn | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 51 | function | Builds a convolutional neural network in Keras. |
348 | 346 | build_lstm | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 78 | function | Builds an LSTM in Keras. |
349 | 347 | load_data | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 93 | function | |
350 | 348 | build_net | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 101 | function | |
351 | 349 | train_net | tensorflow/tensorflow/lite/micro/examples/magic_wand/train/train.py | 111 | function | Trains the model. |
352 | 350 | to_cc | tensorflow/tensorflow/lite/micro/examples/micro_speech/CMSIS/create_constants.py | 26 | function | Writes table values to a C++ source file. |
353 | 351 | to_h | tensorflow/tensorflow/lite/micro/examples/micro_speech/CMSIS/create_constants.py | 44 | function | Writes a header file for the table values. |
354 | 352 | new_data_to_array | tensorflow/tensorflow/lite/micro/examples/micro_speech/apollo3/captured_data_to_wav.py | 28 | function | |
355 | 353 | new_data_to_array | tensorflow/tensorflow/lite/micro/examples/micro_speech/apollo3/compare_1k.py | 29 | function | Converts file information to an in-memory array. |
356 | 354 | to_float | tensorflow/tensorflow/lite/micro/examples/micro_speech/apollo3/compare_1k.py | 63 | function | |
357 | 355 | check_file_existence | tensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py | 52 | function | |
358 | 356 | show_and_save_bitmaps | tensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py | 60 | function | Display and save a list of bitmaps.
Args:
input_file: input file name
bitmap_list: list of numpy arrays to represent bitmap images
channels: color channel count |
359 | 357 | reshape_bitmaps | tensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py | 87 | function | Reshape flat integer arrays.
Args:
frame_list: list of 1-D arrays to represent raw image data
width: image width in pixels
height: image height in pixels
channels: color channel count
Returns:
list of numpy arrays to represent bitmap images |
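The reshape that `reshape_bitmaps` performs, flat pixel data into rows of `width * channels` values, can be sketched without numpy. This is an illustrative stand-in, not the indexed implementation:

```python
def reshape_bitmap(flat, width, height, channels):
    """Reshape a flat pixel list into `height` rows of width * channels values."""
    row_len = width * channels
    assert len(flat) == width * height * channels, "pixel count must match dimensions"
    return [flat[r * row_len:(r + 1) * row_len] for r in range(height)]
```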
360 | 358 | parse_file | tensorflow/tensorflow/lite/micro/examples/person_detection/utils/raw_to_bitmap.py | 109 | function | Convert log file to array of pixels.
Args:
inputfile: log file to parse
width: image width in pixels
height: image height in pixels
channels: color channel count
Returns:
list of 1-D arrays to represent raw image data. |
361 | 359 | generate_conv_model | tensorflow/tensorflow/lite/micro/testing/generate_test_models.py | 34 | function | Creates a basic Keras model and converts to tflite.
This model does not make any relevant classifications. It only exists to
generate a model that is designed to run on embedded devices. |
362 | 360 | rename_example_subfolder_files | tensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py | 29 | function | Moves source files in example subfolders to equivalents at root. |
363 | 361 | move_person_data | tensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py | 41 | function | Moves the downloaded person model into the examples folder. |
364 | 362 | move_person_data_experimental | tensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py | 61 | function | Moves the downloaded person model into the examples folder. |
365 | 363 | move_image_data_experimental | tensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py | 83 | function | Moves the downloaded image detection model into the examples folder. |
366 | 364 | parse_args | tensorflow/tensorflow/lite/micro/tools/make/fix_arduino_subfolders.py | 124 | function | Converts the raw arguments into accessible flags. |
367 | 365 | sanitize_xml | tensorflow/tensorflow/lite/micro/tools/make/generate_keil_project.py | 29 | function | Uses an allowlist to avoid generating bad XML. |
368 | 366 | parse_args | tensorflow/tensorflow/lite/micro/tools/make/generate_keil_project.py | 82 | function | Converts the raw arguments into accessible flags. |
369 | 367 | parse_args | tensorflow/tensorflow/lite/micro/tools/make/merge_arduino_zips.py | 39 | function | Converts the raw arguments into accessible flags. |
370 | 368 | replace_includes | tensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py | 29 | function | Updates any includes to reference the new Arduino library paths. |
371 | 369 | check_ino_functions | tensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py | 51 | function | Ensures the required functions exist. |
372 | 370 | add_example_ino_library_include | tensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py | 65 | function | Makes sure the example includes the header that loads the library. |
373 | 371 | replace_example_includes | tensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py | 71 | function | Updates any includes for local example files. |
374 | 372 | parse_args | tensorflow/tensorflow/lite/micro/tools/make/transform_arduino_source.py | 108 | function | Converts the raw arguments into accessible flags. |
375 | 373 | replace_arduino_includes | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 36 | function | Updates any includes to reference the new Arduino library paths. |
376 | 374 | check_ino_functions | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 58 | function | Ensures the required functions exist. |
377 | 375 | add_example_ino_library_include | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 72 | function | Makes sure the example includes the header that loads the library. |
378 | 376 | replace_arduino_example_includes | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 78 | function | Updates any includes for local example files. |
379 | 377 | replace_esp_example_includes | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 92 | function | Updates any includes for local example files. |
380 | 378 | transform_arduino_sources | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 109 | function | Transform sources for the Arduino platform.
Args:
input_lines: A sequence of lines from the input file to process.
flags: Flags indicating which transformation(s) to apply.
Returns:
The transformed output as a string. |
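The include-rewriting step that `replace_includes` / `replace_arduino_includes` perform can be sketched with a regex substitution. The library name and exact path mapping below are assumptions for illustration, not the real transform rules:

```python
import re

# Hypothetical mapping: flat TFLite-micro include -> Arduino library layout.
LIBRARY_INCLUDE_RE = re.compile(r'#include "tensorflow/(.*)"')

def replace_arduino_includes(line, library_name="TensorFlowLite"):
    """Rewrite a flat include to reference the new Arduino library path."""
    return LIBRARY_INCLUDE_RE.sub(
        r'#include "%s/tensorflow/\1"' % library_name, line)
```

Applied line by line over the input source, each matching include is prefixed with the library directory while non-include lines pass through unchanged.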
381 | 379 | transform_esp_sources | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 138 | function | Transform sources for the ESP-IDF platform.
Args:
input_lines: A sequence of lines from the input file to process.
flags: Flags indicating which transformation(s) to apply.
Returns:
The transformed output as a string. |
382 | 380 | parse_args | tensorflow/tensorflow/lite/micro/tools/make/transform_source.py | 171 | function | Converts the raw arguments into accessible flags. |
383 | 381 | OpsSet | tensorflow/tensorflow/lite/python/convert.py | 80 | class | Enum class defining the sets of ops available to generate TFLite models.
WARNING: Experimental interface, subject to change. |
384 | 382 | get_options | tensorflow/tensorflow/lite/python/convert.py | 115 | method | Returns a list of OpsSet options as a list of strings. |
385 | 383 | ConverterError | tensorflow/tensorflow/lite/python/convert.py | 120 | class | Raised when an error occurs during model conversion. |
386 | 384 | mlir_quantize | tensorflow/tensorflow/lite/python/convert.py | 125 | function | Quantize `input_data_str` with calibration results.
Args:
input_data_str: Input data in serialized form (e.g. a TFLITE model with
calibration results).
disable_per_channel: Bool indicating whether to do per-channel or per-tensor
quantization
fully_quantize: Bool indicating whether to fully quantize the model. Besides
model body, the input/output will be quantized as well.
inference_type: Data type for the activations. The default value is int8.
Returns:
Quantized model in serialized form (e.g. a TFLITE model) with floating-point
inputs and outputs. |
387 | 385 | mlir_sparsify | tensorflow/tensorflow/lite/python/convert.py | 150 | function | Sparsify `input_data_str` to encode sparse tensor with proper format.
Args:
input_data_str: Input data in serialized form (e.g. a TFLITE model).
Returns:
Sparsified model in serialized form (e.g. a TFLITE model). |
388 | 386 | toco_convert_protos | tensorflow/tensorflow/lite/python/convert.py | 162 | function | Convert `input_data_str` according to model and toco parameters.
Unless you know what you are doing, consider using
the more friendly `tf.compat.v1.lite.toco_convert`.
Args:
model_flags_str: Serialized proto describing model properties, see
`toco/model_flags.proto`.
toco_flags_str: Serialized proto describing conversion properties, see
`toco/toco_flags.proto`.
input_data_str: Input data in serialized form (e.g. a graphdef is common)
debug_info_str: Serialized `GraphDebugInfo` proto describing logging
information. (default None)
enable_mlir_converter: Enables MLIR-based conversion instead of the default
TOCO conversion. (default False)
Returns:
Converted model in serialized form (e.g. a TFLITE model is common).
Raises:
ConverterError: When conversion fails in TFLiteConverter, usually due to
ops not being supported.
RuntimeError: When conversion fails, an exception is raised with the error
message embedded. |
389 | 387 | build_toco_convert_protos | tensorflow/tensorflow/lite/python/convert.py | 291 | function | Builds protocol buffers describing a conversion of a model using TOCO.
Typically this is to convert from TensorFlow GraphDef to TFLite, in which
case the default `input_format` and `output_format` are sufficient.
Args:
input_tensors: List of input tensors. Type and shape are computed using
`foo.shape` and `foo.dtype`.
output_tensors: List of output tensors (only .name is used from this).
inference_type: Target data type of real-number arrays in the output file.
Must be `{tf.float32, tf.uint8, tf.int8}`. (default tf.float32)
inference_input_type: Target data type of real-number input arrays. Allows
for a different type for input arrays in the case of quantization. Must be
`{tf.float32, tf.uint8, tf.int8}`. (default `inference_type`)
input_format: Type of data to read. Currently must be
`{TENSORFLOW_GRAPHDEF}`. (default TENSORFLOW_GRAPHDEF)
input_shapes: Input array shape. It needs to be a list of the same length as
`input_tensors`, or None. (default None)
output_format: Output file format. Currently must be `{TFLITE,
GRAPHVIZ_DOT}`. (default TFLITE)
quantized_input_stats: List of tuples of floats representing the mean and
standard deviation. Each tuple maps to the corresponding input tensor.
Only needed if `inference_input_type` is `QUANTIZED_UINT8` or `INT8`.
real_input_value = (quantized_input_value - mean_value) / std_dev_value.
(default None)
default_ranges_stats: Tuple of integers representing (min, max) range values
for all arrays without a specified range. Intended for experimenting with
quantization via "dummy quantization". (default None)
drop_control_dependency: Boolean indicating whether to drop control
dependencies silently. This is due to TFLite not supporting control
dependencies. (default True)
reorder_across_fake_quant: Boolean indicating whether to reorder FakeQuant
nodes in unexpected locations. Used when the location of the FakeQuant
nodes is preventing graph transformations necessary to convert the graph.
Results in a graph that differs from the quantized training graph,
potentially causing differing arithmetic behavior. (default False)
allow_custom_ops: Boolean indicating whether to allow custom operations.
When false any unknown operation is an error. When true, custom ops are
created for any op that is unknown. The developer will need to provide
these to the TensorFlow Lite runtime with a custom resolver. (default
False)
custom_opdefs: List of strings representing custom ops OpDefs that are
included in the GraphDef. Required when using custom operations with the
MLIR-based converter. (default None)
change_concat_input_ranges: Boolean to change behavior of min/max ranges for
inputs and outputs of the concat operator for quantized models. Changes
the ranges of concat operator overlap when true. (default False)
post_training_quantize: Boolean indicating whether to quantize the weights
of the converted float model. Model size will be reduced and there will be
latency improvements (at the cost of accuracy). (default False)
quantize_to_float16: Boolean indicating whether to convert float buffers to
float16. (default False)
dump_graphviz_dir: Full filepath of folder to dump the graphs at various
stages of processing GraphViz .dot files. Preferred over
--output_format=GRAPHVIZ_DOT in order to keep the requirements of the
output file. (default None)
dump_graphviz_video: Boolean indicating whether to dump the graph after
every graph transformation. (default False)
target_ops: Experimental flag, subject to change. Set of OpsSet options
indicating which converter to use. (default set([OpsSet.TFLITE_BUILTINS]))
allow_nonexistent_arrays: Allow specifying array names that don't exist or
are unused in the final graph. (default False)
debug_info: `GraphDebugInfo` proto containing the stack traces for the
original nodes referred by the converted graph.
conversion_summary_dir: A string, the path to the generated conversion logs.
saved_model_dir: Filepath of the saved model to be converted. This value
will be non-empty only when the saved model import path will be used.
Otherwise, the GraphDef-based conversion will be processed.
saved_model_version: SavedModel file format version of the SavedModel file
to be converted. This value will be set only when the SavedModel import
path will be used.
saved_model_tags: Set of string saved model tags, formatted in the
comma-separated value. This value will be set only when the SavedModel
import path will be used.
saved_model_exported_names: Names to be exported (default: export all) when
the saved model import path is on. This value will be set only when the
SavedModel import path will be used.
Returns:
model_flags, toco_flags, debug_info: three protocol buffers describing the
conversion process and debug information.
Raises:
ValueError:
If the input tensor type is unknown
Missing mean_values or std_dev_values
RuntimeError: If TOCO fails to convert (in which case the runtime error's
error text will contain the TOCO error log) |
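The `quantized_input_stats` relation above, `real_input_value = (quantized_input_value - mean_value) / std_dev_value`, can be inverted to derive the `(mean, std_dev)` tuple from a desired float range. This is a hypothetical helper showing the arithmetic, not an API in the indexed module:

```python
def quantization_stats(float_min, float_max, num_bits=8):
    """Derive (mean, std_dev) so [0, 2**num_bits - 1] maps onto [float_min, float_max].

    From real = (quantized - mean) / std_dev:
      quantized = 0    must give float_min  ->  mean = -float_min * std_dev
      quantized = qmax must give float_max  ->  std_dev = qmax / (float_max - float_min)
    """
    qmax = 2 ** num_bits - 1
    std_dev = qmax / (float_max - float_min)
    mean = -float_min * std_dev
    return mean, std_dev
```

For an input normalized to [-1, 1] with 8-bit quantization this gives mean = std_dev = 127.5, so quantized values 0 and 255 map back to -1.0 and 1.0 exactly.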
390 | 388 | toco_convert_graph_def | tensorflow/tensorflow/lite/python/convert.py | 485 | function | Convert a model using TOCO.
This function is used to convert GraphDefs that cannot be loaded into
TensorFlow to TFLite. Conversion can be customized by providing arguments
that are forwarded to `build_toco_convert_protos` (see documentation for
details).
Args:
input_data: Input data (i.e. often `sess.graph_def`),
input_arrays_with_shape: Tuple of strings representing input tensor names
and list of integers representing input shapes
(e.g., [("foo", [1, 16, 16, 3])]). Use only when graph cannot be loaded
into TensorFlow and when `input_tensors` is None. (default None)
output_arrays: List of output tensors to freeze graph with. Use only when
graph cannot be loaded into TensorFlow and when `output_tensors` is None.
(default None)
enable_mlir_converter: Enables MLIR-based conversion instead of TOCO
conversion.
*args: See `build_toco_convert_protos`,
**kwargs: See `build_toco_convert_protos`.
Returns:
The converted data. For example if TFLite was the destination, then
this will be a tflite flatbuffer in a bytes array.
Raises:
Defined in `build_toco_convert_protos`. |
391 | 389 | toco_convert_impl | tensorflow/tensorflow/lite/python/convert.py | 541 | function | Convert a model using TOCO.
Typically this function is used to convert from TensorFlow GraphDef to TFLite.
Conversion can be customized by providing arguments that are forwarded to
`build_toco_convert_protos` (see documentation for details).
Args:
input_data: Input data (i.e. often `sess.graph_def`),
input_tensors: List of input tensors. Type and shape are computed using
`foo.shape` and `foo.dtype`.
output_tensors: List of output tensors (only .name is used from this).
enable_mlir_converter: Enables MLIR-based conversion instead of TOCO
conversion.
*args: See `build_toco_convert_protos`,
**kwargs: See `build_toco_convert_protos`.
Returns:
The converted data. For example if TFLite was the destination, then
this will be a tflite flatbuffer in a bytes array.
Raises:
Defined in `build_toco_convert_protos`. |
392 | 390 | toco_convert | tensorflow/tensorflow/lite/python/convert.py | 580 | function | Convert a model using TOCO.
Typically this function is used to convert from TensorFlow GraphDef to TFLite.
Conversion can be customized by providing arguments that are forwarded to
`build_toco_convert_protos` (see documentation for details). This function has
been deprecated. Please use `lite.TFLiteConverter` instead.
Args:
input_data: Input data (i.e. often `sess.graph_def`),
input_tensors: List of input tensors. Type and shape are computed using
`foo.shape` and `foo.dtype`.
output_tensors: List of output tensors (only .name is used from this).
*args: See `build_toco_convert_protos`,
**kwargs: See `build_toco_convert_protos`.
Returns:
The converted data. For example if TFLite was the destination, then
this will be a tflite flatbuffer in a bytes array.
Raises:
Defined in `build_toco_convert_protos`. |
393 | 391 | get_meta_graph_def | tensorflow/tensorflow/lite/python/convert_saved_model.py | 46 | function | Validate saved_model and extract MetaGraphDef.
Args:
saved_model_dir: saved_model path to convert.
tag_set: Set of tag(s) of the MetaGraphDef to load.
Returns:
The meta_graph_def used for tflite conversion.
Raises:
ValueError: No valid MetaGraphDef for given tag_set. |
394 | 392 | get_signature_def | tensorflow/tensorflow/lite/python/convert_saved_model.py | 63 | function | Get the signature def from meta_graph with given signature_key.
Args:
meta_graph: meta_graph_def.
signature_key: signature_def in the meta_graph_def.
Returns:
The signature_def used for tflite conversion.
Raises:
ValueError: Given signature_key is not valid for this meta_graph. |
395 | 393 | get_inputs_outputs | tensorflow/tensorflow/lite/python/convert_saved_model.py | 88 | function | Get inputs and outputs from SignatureDef.
Args:
signature_def: SignatureDef in the meta_graph_def for conversion.
Returns:
The inputs and outputs in the graph for conversion. |
396 | 394 | freeze_saved_model | tensorflow/tensorflow/lite/python/convert_saved_model.py | 155 | function | Converts a SavedModel to a frozen graph.
Args:
saved_model_dir: SavedModel directory to convert.
input_arrays: List of input tensors to freeze graph with. Uses input arrays
from SignatureDef when none are provided.
input_shapes: Dict of strings representing input tensor names to list of
integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}).
Automatically determined when input shapes is None (e.g., {"foo" : None}).
output_arrays: List of output tensors to freeze graph with. Uses output
arrays from SignatureDef when none are provided.
tag_set: Set of tags identifying the MetaGraphDef within the SavedModel to
analyze. All tags in the tag set must be present.
signature_key: Key identifying SignatureDef containing inputs and outputs.
Returns:
frozen_graph_def: Frozen GraphDef.
in_tensors: List of input tensors for the graph.
out_tensors: List of output tensors for the graph.
graph: `Graph` object.
Raises:
ValueError:
SavedModel doesn't contain a MetaGraphDef identified by tag_set.
signature_key is not in the MetaGraphDef.
assets/ directory is in the MetaGraphDef.
input_shapes does not match the length of input_arrays.
input_arrays or output_arrays are not valid. |
397 | 395 | Delegate | tensorflow/tensorflow/lite/python/interpreter.py | 42 | class | Python wrapper class to manage TfLiteDelegate objects.
The shared library is expected to have two functions:
TfLiteDelegate* tflite_plugin_create_delegate(
char**, char**, size_t, void (*report_error)(const char *))
void tflite_plugin_destroy_delegate(TfLiteDelegate*)
The first one creates a delegate object. It may return NULL to indicate an
error (with a suitable error message reported by calling report_error()).
The second one destroys the delegate object and must be called for every
created delegate object. Passing NULL as the argument value is allowed, i.e.
tflite_plugin_destroy_delegate(tflite_plugin_create_delegate(...))
always works. |
398 | 396 | report | tensorflow/tensorflow/lite/python/interpreter.py | 101 | method | |
399 | 397 | load_delegate | tensorflow/tensorflow/lite/python/interpreter.py | 132 | function | Returns loaded Delegate object.
Args:
library: Name of shared library containing the
[TfLiteDelegate](https://www.tensorflow.org/lite/performance/delegates).
options: Dictionary of options that are required to load the delegate. All
keys and values in the dictionary should be convertible to str. Consult
the documentation of the specific delegate for required and legal options.
(default None)
Returns:
Delegate object.
Raises:
ValueError: Delegate failed to load.
RuntimeError: If delegate loading is used on unsupported platform. |
400 | 398 | Interpreter | tensorflow/tensorflow/lite/python/interpreter.py | 159 | class | Interpreter interface for TensorFlow Lite Models.
This makes the TensorFlow Lite interpreter accessible in Python.
It is possible to use this interpreter in a multithreaded Python environment,
but you must be sure to call functions of a particular instance from only
one thread at a time. So if you want to have 4 threads running different
inferences simultaneously, create an interpreter for each one as thread-local
data. Similarly, if you are calling invoke() in one thread on a single
interpreter but you want to use tensor() on another thread once it is done,
you must use a synchronization primitive between the threads to ensure invoke
has returned before calling tensor(). |
401 | 399 | allocate_tensors | tensorflow/tensorflow/lite/python/interpreter.py | 242 | method | |
402 | 400 | get_tensor_details | tensorflow/tensorflow/lite/python/interpreter.py | 365 | method | Gets tensor details for every tensor with valid tensor details.
Tensors where required information about the tensor is not found are not
added to the list. This includes temporary tensors without a name.
Returns:
A list of dictionaries containing tensor information. |
403 | 401 | get_input_details | tensorflow/tensorflow/lite/python/interpreter.py | 382 | method | Gets model input details.
Returns:
A list of input details. |
404 | 402 | set_tensor | tensorflow/tensorflow/lite/python/interpreter.py | 392 | method | Sets the value of the input tensor.
Note this copies data in `value`.
If you want to avoid copying, you can use the `tensor()` function to get a
numpy buffer pointing to the input buffer in the tflite interpreter.
Args:
tensor_index: Tensor index of tensor to set. This value can be gotten from
the 'index' field in get_input_details.
value: Value of tensor to set.
Raises:
ValueError: If the interpreter could not set the tensor. |
405 | 403 | resize_tensor_input | tensorflow/tensorflow/lite/python/interpreter.py | 410 | method | Resizes an input tensor.
```
interpreter = Interpreter(model_content=tflite_model)
interpreter.resize_tensor_input(0, [1, 224, 224, 3], strict=True)
interpreter.allocate_tensors()
interpreter.invoke()
```
Args:
input_index: Tensor index of input to set. This value can be gotten from
the 'index' field in get_input_details.
tensor_size: The tensor_shape to resize the input to.
strict: Only unknown dimensions can be resized when `strict` is True.
Unknown dimensions are indicated as `-1` in the `shape_signature`
attribute of a given tensor. (default False)
Raises:
ValueError: If the interpreter could not resize the input tensor. |
406 | 404 | get_output_details | tensorflow/tensorflow/lite/python/interpreter.py | 437 | method | Gets model output details.
Returns:
A list of output details. |
407 | 405 | get_tensor | tensorflow/tensorflow/lite/python/interpreter.py | 447 | method | Gets the value of the input tensor (get a copy).
If you wish to avoid the copy, use `tensor()`. This function cannot be used
to read intermediate results.
Args:
tensor_index: Tensor index of tensor to get. This value can be gotten from
the 'index' field in get_output_details.
Returns:
a numpy array. |
408 | 406 | tensor | tensorflow/tensorflow/lite/python/interpreter.py | 462 | method | Returns function that gives a numpy view of the current tensor buffer.
This allows reading and writing to the tensor without copies. This more
closely mirrors the C++ Interpreter class interface's tensor() member, hence
the name. Be careful to not hold these output references through calls
to `allocate_tensors()` and `invoke()`. This function cannot be used to read
intermediate results.
Usage:
```
interpreter.allocate_tensors()
input = interpreter.tensor(interpreter.get_input_details()[0]["index"])
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
for i in range(10):
input().fill(3.)
interpreter.invoke()
print("inference %s" % output())
```
Notice how this function avoids making a numpy array directly. This is
because it is important to not hold actual numpy views to the data longer
than necessary. If you do, then the interpreter can no longer be invoked,
because it is possible the interpreter would resize and invalidate the
referenced tensors. The NumPy API doesn't allow any mutability of
the underlying buffers.
WRONG:
```
input = interpreter.tensor(interpreter.get_input_details()[0]["index"])()
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])()
interpreter.allocate_tensors() # This will throw RuntimeError
for i in range(10):
input.fill(3.)
interpreter.invoke() # this will throw RuntimeError since input, output
```
Args:
tensor_index: Tensor index of tensor to get. This value can be gotten from
the 'index' field in get_output_details.
Returns:
A function that can return a new numpy array pointing to the internal
TFLite tensor state at any point. It is safe to hold the function forever,
but it is not safe to hold the numpy array forever. |
409 | 407 | invoke | tensorflow/tensorflow/lite/python/interpreter.py | 512 | method | Invoke the interpreter.
Be sure to set the input sizes, allocate tensors and fill values before
calling this. Also, note that this function releases the GIL so heavy
computation can be done in the background while the Python interpreter
continues. No other function on this object should be called while the
invoke() call has not finished.
Raises:
ValueError: When the underlying interpreter fails raise ValueError. |
410 | 408 | reset_all_variables | tensorflow/tensorflow/lite/python/interpreter.py | 527 | method | |
411 | 409 | InterpreterWithCustomOps | tensorflow/tensorflow/lite/python/interpreter.py | 552 | class | Interpreter interface for TensorFlow Lite Models that accepts custom ops.
The interface provided by this class is experimental and therefore not exposed
as part of the public API.
Wraps the tf.lite.Interpreter class and adds the ability to load custom ops
by providing the names of functions that take a pointer to a BuiltinOpResolver
and add a custom op. |
412 | 410 | Optimize | tensorflow/tensorflow/lite/python/lite.py | 88 | class | Enum defining the optimizations to apply when generating tflite graphs.
Some optimizations may come at the cost of accuracy.
DEFAULT
Default optimization strategy.
Converter will do its best to improve size and latency based on the
information provided.
Enhanced optimizations are gained by providing a representative_dataset.
This is recommended, and is currently equivalent to the modes below.
Currently, weights will be quantized and if representative_dataset is
provided, activations for quantizable operations will also be quantized.
OPTIMIZE_FOR_SIZE
Deprecated. Does the same as DEFAULT.
OPTIMIZE_FOR_LATENCY
Deprecated. Does the same as DEFAULT. |
413 | 411 | RepresentativeDataset | tensorflow/tensorflow/lite/python/lite.py | 131 | class | Representative dataset to evaluate optimizations.
A representative dataset that can be used to evaluate optimizations by the
converter. E.g. converter can use these examples to estimate (min, max) ranges
by calibrating the model on inputs. This can allow converter to quantize a
converted floating point model. |
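A representative dataset is typically supplied as a generator yielding one list of input arrays per calibration step; in real use it would be assigned to `converter.representative_dataset` and would yield numpy float32 arrays matching the model's input shape. The sketch below uses plain lists as stand-ins, and the function name and parameters are illustrative assumptions:

```python
def representative_dataset_gen(samples, num_calibration_steps=100):
    """Yield one list of model inputs per calibration step (illustrative sketch).

    Real code would yield [np.float32 array with the model's input shape].
    """
    for i, sample in enumerate(samples):
        if i >= num_calibration_steps:
            break  # cap calibration at num_calibration_steps examples
        yield [sample]
```

The converter iterates this generator to estimate (min, max) activation ranges before quantizing.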
414 | 412 | TargetSpec | tensorflow/tensorflow/lite/python/lite.py | 153 | class | Specification of target device.
Details about target device. Converter optimizes the generated model for
specific device.
Attributes:
supported_ops: Experimental flag, subject to change. Set of OpsSet options
supported by the device. (default set([OpsSet.TFLITE_BUILTINS]))
supported_types: List of types for constant values on the target device.
Supported values are types exported by lite.constants. Frequently, an
optimization choice is driven by the most compact (i.e. smallest) type in
this list (default [constants.FLOAT]) |
415 | 413 | QuantizationMode | tensorflow/tensorflow/lite/python/lite.py | 177 | class | QuantizationMode determines the quantized conversion from user options. |
416 | 414 | post_training_int8_no_float | tensorflow/tensorflow/lite/python/lite.py | 189 | method | Post training int8 quantize, disallow float fallback. |
417 | 415 | post_training_int8_allow_float | tensorflow/tensorflow/lite/python/lite.py | 195 | method | Post training int8 quantize, allow float fallback. |
418 | 416 | is_post_training_integer_quantize | tensorflow/tensorflow/lite/python/lite.py | 202 | method | Post training integer quantization. |
419 | 417 | training_time_int8_allow_float | tensorflow/tensorflow/lite/python/lite.py | 207 | method | Training-time int8 quantize, allow float fallback. |
420 | 418 | post_training_int16x8_no_float | tensorflow/tensorflow/lite/python/lite.py | 213 | method | Post training int16x8 quantize, disallow float fallback. |
421 | 419 | post_training_int16x8_allow_float | tensorflow/tensorflow/lite/python/lite.py | 220 | method | Post training int16x8 quantize, allow float fallback. |
422 | 420 | post_training_dynamic_range_int8 | tensorflow/tensorflow/lite/python/lite.py | 224 | method | Post training int8 const, on-the-fly int8 quantize of dynamic tensors. |
423 | 421 | post_training_fp16 | tensorflow/tensorflow/lite/python/lite.py | 233 | method | Post training fp16 quantize. |
424 | 422 | fp32_execution | tensorflow/tensorflow/lite/python/lite.py | 238 | method | If none of the above are true. |
425 | 423 | activations_type | tensorflow/tensorflow/lite/python/lite.py | 248 | method | |
426 | 424 | converter_flags | tensorflow/tensorflow/lite/python/lite.py | 252 | method | Flags to the converter. |
427 | 425 | quantizer_flags | tensorflow/tensorflow/lite/python/lite.py | 292 | method | Default flags to the TFMOT quantizer. |
428 | 426 | contains_training_quant_op | tensorflow/tensorflow/lite/python/lite.py | 371 | method | Checks if the graph contains any training-time quantization ops. |
429 | 427 | TFLiteConverterBase | tensorflow/tensorflow/lite/python/lite.py | 384 | class | Converter subclass to share functionality between V1 and V2 converters. |
430 | 428 | TFLiteConverterBaseV2 | tensorflow/tensorflow/lite/python/lite.py | 522 | class | Converter subclass to share functionality between V2 converters.
Attributes:
allow_custom_ops: Boolean indicating whether to allow custom operations.
When False, any unknown operation is an error. When True, custom ops are
created for any op that is unknown. The developer needs to provide these
to the TensorFlow Lite runtime with a custom resolver. (default False)
optimizations: Experimental flag, subject to change. A list of optimizations
to apply when converting the model. E.g. `[Optimize.DEFAULT]`
representative_dataset: A representative dataset that can be used to
generate input and output samples for the model. The converter can use the
dataset to evaluate different optimizations. Note that this is an optional
attribute, but it is required if INT8 is the only supported builtin op in
the target ops.
target_spec: Experimental flag, subject to change. Specification of target
device.
inference_input_type: Data type of the input layer. Note that integer types
(tf.int8 and tf.uint8) are currently only supported for post training
integer quantization. (default tf.float32, must be in {tf.float32,
tf.int8, tf.uint8})
inference_output_type: Data type of the output layer. Note that integer
types (tf.int8 and tf.uint8) are currently only supported for post
training integer quantization. (default tf.float32, must be in
{tf.float32, tf.int8, tf.uint8})
experimental_new_converter: Experimental flag, subject to change. Enables
MLIR-based conversion instead of TOCO conversion. (default True) |
431 | 429 | convert | tensorflow/tensorflow/lite/python/lite.py | 574 | method | Converts a TensorFlow GraphDef based on instance variables.
Args:
graph_def: Frozen TensorFlow GraphDef.
input_tensors: List of input tensors. Type and shape are computed using
`foo.shape` and `foo.dtype`.
output_tensors: List of output tensors (only .name is used from this).
Returns:
The converted data in serialized format.
Raises:
ValueError:
No concrete function is specified.
Multiple concrete functions are specified.
Input shape is not specified.
Invalid quantization parameters. |
432 | 430 | TFLiteSavedModelConverterV2 | tensorflow/tensorflow/lite/python/lite.py | 652 | class | Converts the given SavedModel into TensorFlow Lite model.
Attributes:
saved_model_dir: Directory of the SavedModel. |
433 | 431 | convert | tensorflow/tensorflow/lite/python/lite.py | 686 | method | Converts a TensorFlow GraphDef based on instance variables.
Returns:
The converted data in serialized format.
Raises:
ValueError:
No concrete function is specified.
Multiple concrete functions are specified.
Input shape is not specified.
Invalid quantization parameters. |
434 | 432 | TFLiteKerasModelConverterV2 | tensorflow/tensorflow/lite/python/lite.py | 719 | class | Converts the given Keras model into TensorFlow Lite model. |
435 | 433 | convert | tensorflow/tensorflow/lite/python/lite.py | 781 | method | Converts a keras model based on instance variables.
Returns:
The converted data in serialized format.
Raises:
ValueError:
Multiple concrete functions are specified.
Input shape is not specified.
Invalid quantization parameters. |
436 | 434 | TFLiteFrozenGraphConverterV2 | tensorflow/tensorflow/lite/python/lite.py | 840 | class | Converts the given frozen graph into TensorFlow Lite model. |
437 | 435 | convert | tensorflow/tensorflow/lite/python/lite.py | 859 | method | Converts a TensorFlow GraphDef based on instance variables.
Returns:
The converted data in serialized format.
Raises:
ValueError:
No concrete function is specified.
Multiple concrete functions are specified.
Input shape is not specified.
Invalid quantization parameters. |
438 | 436 | TFLiteConverterV2 | tensorflow/tensorflow/lite/python/lite.py | 910 | class | Converts a TensorFlow model into TensorFlow Lite model.
Attributes:
allow_custom_ops: Boolean indicating whether to allow custom operations.
When False, any unknown operation is an error. When True, custom ops are
created for any op that is unknown. The developer needs to provide these
to the TensorFlow Lite runtime with a custom resolver. (default False)
optimizations: Experimental flag, subject to change. A list of optimizations
to apply when converting the model. E.g. `[Optimize.DEFAULT]`
representative_dataset: A representative dataset that can be used to
generate input and output samples for the model. The converter can use the
dataset to evaluate different optimizations. Note that this is an optional
attribute, but it is required if INT8 is the only supported builtin op in
the target ops.
target_spec: Experimental flag, subject to change. Specification of target
device.
inference_input_type: Data type of the input layer. Note that integer types
(tf.int8 and tf.uint8) are currently only supported for post training
integer quantization. (default tf.float32, must be in {tf.float32,
tf.int8, tf.uint8})
inference_output_type: Data type of the output layer. Note that integer
types (tf.int8 and tf.uint8) are currently only supported for post
training integer quantization. (default tf.float32, must be in
{tf.float32, tf.int8, tf.uint8})
experimental_new_converter: Experimental flag, subject to change. Enables
MLIR-based conversion instead of TOCO conversion. (default True)
Example usage:
```python
# Converting a SavedModel to a TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
# Converting a tf.Keras model to a TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Converting ConcreteFunctions to a TensorFlow Lite model.
converter = tf.lite.TFLiteConverter.from_concrete_functions([func])
tflite_model = converter.convert()
``` |
439 | 437 | from_concrete_functions | tensorflow/tensorflow/lite/python/lite.py | 971 | method | Creates a TFLiteConverter object from ConcreteFunctions.
Args:
funcs: List of TensorFlow ConcreteFunctions. The list should not contain
duplicate elements. Currently converter can only convert a single
ConcreteFunction. Converting multiple functions is under development.
Returns:
TFLiteConverter object.
Raises:
Invalid input type. |
440 | 438 | from_saved_model | tensorflow/tensorflow/lite/python/lite.py | 995 | method | Creates a TFLiteConverter object from a SavedModel directory.
Args:
saved_model_dir: SavedModel directory to convert.
signature_keys: List of keys identifying SignatureDef containing inputs
and outputs. Elements should not be duplicated. By default the
`signatures` attribute of the MetaGraphDef is used. (default
saved_model.signatures)
tags: Set of tags identifying the MetaGraphDef within the SavedModel to
analyze. All tags in the tag set must be present. (default set(SERVING))
Returns:
TFLiteConverter object.
Raises:
Invalid signature keys. |
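The defaulting and validation rules for `signature_keys` described above can be sketched as a plain helper; `resolve_signature_keys` is a hypothetical name, not part of the TensorFlow API:

```python
def resolve_signature_keys(available, requested=None):
    # Default to every signature key; reject duplicates and unknown keys,
    # mirroring the documented "Invalid signature keys" error.
    if requested is None:
        return list(available)
    if len(set(requested)) != len(requested):
        raise ValueError("signature_keys must not contain duplicates")
    unknown = [k for k in requested if k not in available]
    if unknown:
        raise ValueError("invalid signature keys: %s" % unknown)
    return list(requested)
```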
441 | 439 | from_keras_model | tensorflow/tensorflow/lite/python/lite.py | 1057 | method | Creates a TFLiteConverter object from a Keras model.
Args:
model: tf.Keras.Model
Returns:
TFLiteConverter object. |
442 | 440 | convert | tensorflow/tensorflow/lite/python/lite.py | 1069 | method | Converts a TensorFlow GraphDef based on instance variables.
Returns:
The converted data in serialized format.
Raises:
ValueError:
No concrete function is specified.
Multiple concrete functions are specified.
Input shape is not specified.
Invalid quantization parameters. |
443 | 441 | TFLiteConverterBaseV1 | tensorflow/tensorflow/lite/python/lite.py | 1085 | class | Converter subclass to share functionality between V1 converters.
Attributes:
inference_type: Target data type of real-number arrays in the output file.
Must be `{tf.float32, tf.uint8}`. If `optimizations` are provided, this
parameter is ignored. (default tf.float32)
inference_input_type: Target data type of real-number input arrays. Allows
for a different type for input arrays. If an integer type is provided and
`optimizations` are not used, `quantized_inputs_stats` must be provided.
If `inference_type` is tf.uint8, signaling conversion to a fully quantized
model from a quantization-aware trained input model, then
`inference_input_type` defaults to tf.uint8. In all other cases,
`inference_input_type` defaults to tf.float32. Must be `{tf.float32,
tf.uint8, tf.int8}`
inference_output_type: Target data type of real-number output arrays. Allows
for a different type for output arrays. If `inference_type` is tf.uint8,
signaling conversion to a fully quantized model from a quantization-aware
trained output model, then `inference_output_type` defaults to tf.uint8.
In all other cases, `inference_output_type` must be tf.float32; an error
will be thrown otherwise. Must be `{tf.float32, tf.uint8, tf.int8}`
output_format: Output file format. Currently must be `{TFLITE,
GRAPHVIZ_DOT}`. (default TFLITE)
quantized_input_stats: Dict of strings representing input tensor names
mapped to tuple of floats representing the mean and standard deviation
of the training data (e.g., {"foo" : (0., 1.)}). Only needed if
`inference_input_type` is `QUANTIZED_UINT8`. real_input_value =
(quantized_input_value - mean_value) / std_dev_value. (default {})
default_ranges_stats: Tuple of integers representing (min, max) range values
for all arrays without a specified range. Intended for experimenting with
quantization via "dummy quantization". (default None)
drop_control_dependency: Boolean indicating whether to drop control
dependencies silently. This is due to TFLite not supporting control
dependencies. (default True)
reorder_across_fake_quant: Boolean indicating whether to reorder FakeQuant
nodes in unexpected locations. Used when the location of the FakeQuant
nodes is preventing graph transformations necessary to convert the graph.
Results in a graph that differs from the quantized training graph,
potentially causing differing arithmetic behavior. (default False)
change_concat_input_ranges: Boolean to change behavior of min/max ranges for
inputs and outputs of the concat operator for quantized models. Changes
the ranges of concat operator overlap when true. (default False)
allow_custom_ops: Boolean indicating whether to allow custom operations.
When false any unknown operation is an error. When true, custom ops are
created for any op that is unknown. The developer will need to provide
these to the TensorFlow Lite runtime with a custom resolver. (default
False)
post_training_quantize: Deprecated. Please specify `[Optimize.DEFAULT]` for
`optimizations` instead. Boolean indicating whether to quantize the
weights of the converted float model. Model size will be reduced and
there will be latency improvements (at the cost of accuracy). (default
False)
dump_graphviz_dir: Full filepath of folder to dump the graphs at various
stages of processing GraphViz .dot files. Preferred over
--output_format=GRAPHVIZ_DOT in order to keep the requirements of the
output file. (default None)
dump_graphviz_video: Boolean indicating whether to dump the graph after
every graph transformation. (default False)
conversion_summary_dir: A string indicating the path to the generated
conversion logs.
target_ops: Deprecated. Please specify `target_spec.supported_ops` instead.
Set of OpsSet options indicating which converter to use. (default
set([OpsSet.TFLITE_BUILTINS]))
target_spec: Experimental flag, subject to change. Specification of target
device.
optimizations: Experimental flag, subject to change. A list of optimizations
to apply when converting the model. E.g. `[Optimize.DEFAULT]`
representative_dataset: A representative dataset that can be used to
generate input and output samples for the model. The converter can use the
dataset to evaluate different optimizations.
experimental_new_converter: Experimental flag, subject to change. Enables
MLIR-based conversion instead of TOCO conversion. (default True) |
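The `quantized_input_stats` formula quoted in the attributes above can be written out as a plain function (the uint8-style values in the usage line are illustrative):

```python
def real_input_value(quantized_input_value, mean_value, std_dev_value):
    # real_input_value = (quantized_input_value - mean_value) / std_dev_value
    return (quantized_input_value - mean_value) / std_dev_value

# A quantized uint8 value equal to the mean maps back to 0.0.
zero = real_input_value(128, 128.0, 128.0)
```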
444 | 442 | convert | tensorflow/tensorflow/lite/python/lite.py | 1226 | method | Converts a TensorFlow GraphDef based on instance variables.
Returns:
The converted data in serialized format. Either a TFLite Flatbuffer or a
Graphviz graph depending on value in `output_format`.
Raises:
ValueError:
Input shape is not specified.
None value for dimension in input_tensor. |
445 | 443 | get_input_arrays | tensorflow/tensorflow/lite/python/lite.py | 1352 | method | Returns a list of the names of the input tensors.
Returns:
List of strings. |
446 | 444 | TFLiteSavedModelConverter | tensorflow/tensorflow/lite/python/lite.py | 1410 | class | Converts the given SavedModel into TensorFlow Lite model.
Attributes:
saved_model_dir: Directory of the SavedModel. |
447 | 445 | TFLiteKerasModelConverter | tensorflow/tensorflow/lite/python/lite.py | 1458 | class | Converts the given Keras model into TensorFlow Lite model. |
448 | 446 | convert | tensorflow/tensorflow/lite/python/lite.py | 1567 | method | Converts a Keras model based on instance variables.
Returns:
The converted data in serialized format. Either a TFLite Flatbuffer or a
Graphviz graph depending on value in `output_format`.
Raises:
ValueError:
Input shape is not specified.
None value for dimension in input_tensor. |
449 | 447 | TFLiteFrozenGraphConverter | tensorflow/tensorflow/lite/python/lite.py | 1586 | class | Converts the given frozen graph def into TensorFlow Lite model. |
450 | 448 | TFLiteConverter | tensorflow/tensorflow/lite/python/lite.py | 1634 | class | Convert a TensorFlow model into `output_format`.
This is used to convert from a TensorFlow GraphDef, SavedModel or tf.keras
model into either a TFLite FlatBuffer or graph visualization.
Attributes:
inference_type: Target data type of real-number arrays in the output file.
Must be `{tf.float32, tf.uint8}`. If `optimizations` are provided, this
parameter is ignored. (default tf.float32)
inference_input_type: Target data type of real-number input arrays. Allows
for a different type for input arrays.
If an integer type is provided and `optimizations` are not used,
`quantized_inputs_stats` must be provided.
If `inference_type` is tf.uint8, signaling conversion to a fully quantized
model from a quantization-aware trained input model, then
`inference_input_type` defaults to tf.uint8.
In all other cases, `inference_input_type` defaults to tf.float32.
Must be `{tf.float32, tf.uint8, tf.int8}`
inference_output_type: Target data type of real-number output arrays. Allows
for a different type for output arrays.
If `inference_type` is tf.uint8, signaling conversion to a fully quantized
model from a quantization-aware trained output model, then
`inference_output_type` defaults to tf.uint8.
In all other cases, `inference_output_type` must be tf.float32; an error
will be thrown otherwise.
Must be `{tf.float32, tf.uint8, tf.int8}`
output_format: Output file format. Currently must be `{TFLITE,
GRAPHVIZ_DOT}`. (default TFLITE)
quantized_input_stats: Dict of strings representing input tensor names
mapped to tuple of floats representing the mean and standard deviation
of the training data (e.g., {"foo" : (0., 1.)}). Only needed if
`inference_input_type` is `QUANTIZED_UINT8`.
real_input_value = (quantized_input_value - mean_value) / std_dev_value.
(default {})
default_ranges_stats: Tuple of integers representing (min, max) range values
for all arrays without a specified range. Intended for experimenting with
quantization via "dummy quantization". (default None)
drop_control_dependency: Boolean indicating whether to drop control
dependencies silently. This is due to TFLite not supporting control
dependencies. (default True)
reorder_across_fake_quant: Boolean indicating whether to reorder FakeQuant
nodes in unexpected locations. Used when the location of the FakeQuant
nodes is preventing graph transformations necessary to convert the graph.
Results in a graph that differs from the quantized training graph,
potentially causing differing arithmetic behavior. (default False)
change_concat_input_ranges: Boolean to change behavior of min/max ranges for
inputs and outputs of the concat operator for quantized models. Changes
the ranges of concat operator overlap when true. (default False)
allow_custom_ops: Boolean indicating whether to allow custom operations.
When false any unknown operation is an error. When true, custom ops are
created for any op that is unknown. The developer will need to provide
these to the TensorFlow Lite runtime with a custom resolver.
(default False)
post_training_quantize: Deprecated. Please specify `[Optimize.DEFAULT]` for
`optimizations` instead. Boolean indicating whether to quantize the
weights of the converted float model. Model size will be reduced and
there will be latency improvements (at the cost of accuracy).
(default False)
dump_graphviz_dir: Full filepath of folder to dump the graphs at various
stages of processing GraphViz .dot files. Preferred over
--output_format=GRAPHVIZ_DOT in order to keep the requirements of the
output file. (default None)
dump_graphviz_video: Boolean indicating whether to dump the graph after
every graph transformation. (default False)
conversion_summary_dir: A string indicating the path to the generated
conversion logs.
target_ops: Deprecated. Please specify `target_spec.supported_ops` instead.
Set of OpsSet options indicating which converter to use.
(default set([OpsSet.TFLITE_BUILTINS]))
target_spec: Experimental flag, subject to change. Specification of target
device.
optimizations: Experimental flag, subject to change. A list of optimizations
to apply when converting the model. E.g. `[Optimize.DEFAULT]`
representative_dataset: A representative dataset that can be used to
generate input and output samples for the model. The converter can use
the dataset to evaluate different optimizations.
experimental_new_converter: Experimental flag, subject to change.
Enables MLIR-based conversion instead of TOCO conversion. (default True)
Example usage:
```python
# Converting a GraphDef from session.
converter = tf.compat.v1.TFLiteConverter.from_session(
sess, in_tensors, out_tensors)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
# Converting a GraphDef from file.
converter = tf.compat.v1.TFLiteConverter.from_frozen_graph(
graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
# Converting a SavedModel.
converter = tf.compat.v1.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
# Converting a tf.keras model.
converter = tf.compat.v1.TFLiteConverter.from_keras_model_file(keras_model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
``` |
451 | 449 | from_session | tensorflow/tensorflow/lite/python/lite.py | 1776 | method | Creates a TFLiteConverter class from a TensorFlow Session.
Args:
sess: TensorFlow Session.
input_tensors: List of input tensors. Type and shape are computed using
`foo.shape` and `foo.dtype`.
output_tensors: List of output tensors (only .name is used from this).
Returns:
TFLiteConverter class. |
452 | 450 | from_frozen_graph | tensorflow/tensorflow/lite/python/lite.py | 1796 | method | Creates a TFLiteConverter class from a file containing a frozen GraphDef.
Args:
graph_def_file: Full filepath of file containing frozen GraphDef.
input_arrays: List of input tensors to freeze graph with.
output_arrays: List of output tensors to freeze graph with.
input_shapes: Dict of strings representing input tensor names to list of
integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}).
Automatically determined when input shapes is None (e.g., {"foo" :
None}). (default None)
Returns:
TFLiteConverter class.
Raises:
IOError:
File not found.
Unable to parse input file.
ValueError:
The graph is not frozen.
input_arrays or output_arrays contains an invalid tensor name.
input_shapes is not correctly defined when required. |
453 | 451 | from_saved_model | tensorflow/tensorflow/lite/python/lite.py | 1889 | method | Creates a TFLiteConverter class from a SavedModel.
Args:
saved_model_dir: SavedModel directory to convert.
input_arrays: List of input tensors to freeze graph with. Uses input
arrays from SignatureDef when none are provided. (default None)
input_shapes: Dict of strings representing input tensor names to list of
integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}).
Automatically determined when input shapes is None (e.g., {"foo" :
None}). (default None)
output_arrays: List of output tensors to freeze graph with. Uses output
arrays from SignatureDef when none are provided. (default None)
tag_set: Set of tags identifying the MetaGraphDef within the SavedModel to
analyze. All tags in the tag set must be present. (default set("serve"))
signature_key: Key identifying SignatureDef containing inputs and outputs.
(default DEFAULT_SERVING_SIGNATURE_DEF_KEY)
Returns:
TFLiteConverter class. |
454 | 452 | from_keras_model_file | tensorflow/tensorflow/lite/python/lite.py | 1936 | method | Creates a TFLiteConverter class from a tf.keras model file.
Args:
model_file: Full filepath of HDF5 file containing the tf.keras model.
input_arrays: List of input tensors to freeze graph with. Uses input
arrays from SignatureDef when none are provided. (default None)
input_shapes: Dict of strings representing input tensor names to list of
integers representing input shapes (e.g., {"foo" : [1, 16, 16, 3]}).
Automatically determined when input shapes is None (e.g., {"foo" :
None}). (default None)
output_arrays: List of output tensors to freeze graph with. Uses output
arrays from SignatureDef when none are provided. (default None)
custom_objects: Dict mapping names (strings) to custom classes or
functions to be considered during model deserialization. (default None)
Returns:
TFLiteConverter class. |
455 | 453 | convert | tensorflow/tensorflow/lite/python/lite.py | 1964 | method | Converts a TensorFlow GraphDef based on instance variables.
Returns:
The converted data in serialized format. Either a TFLite Flatbuffer or a
Graphviz graph depending on value in `output_format`.
Raises:
ValueError:
Input shape is not specified.
None value for dimension in input_tensor. |
456 | 454 | TocoConverter | tensorflow/tensorflow/lite/python/lite.py | 1979 | class | Convert a TensorFlow model into `output_format` using TOCO.
This class has been deprecated. Please use `lite.TFLiteConverter` instead. |
457 | 455 | from_session | tensorflow/tensorflow/lite/python/lite.py | 1988 | method | Creates a TocoConverter class from a TensorFlow Session. |
458 | 456 | from_frozen_graph | tensorflow/tensorflow/lite/python/lite.py | 1995 | method | Creates a TocoConverter class from a file containing a frozen graph. |
459 | 457 | from_saved_model | tensorflow/tensorflow/lite/python/lite.py | 2007 | method | Creates a TocoConverter class from a SavedModel. |
460 | 458 | from_keras_model_file | tensorflow/tensorflow/lite/python/lite.py | 2022 | method | Creates a TocoConverter class from a tf.keras model file. |
461 | 459 | FromConstructor | tensorflow/tensorflow/lite/python/lite_test.py | 76 | class | |
462 | 460 | FromFrozenGraphFile | tensorflow/tensorflow/lite/python/lite_test.py | 1464 | class | |
463 | 461 | FromFrozenGraphObjectDetection | tensorflow/tensorflow/lite/python/lite_test.py | 1650 | class | |
464 | 462 | MyAddLayer | tensorflow/tensorflow/lite/python/lite_test.py | 1897 | class | |
465 | 463 | call | tensorflow/tensorflow/lite/python/lite_test.py | 1903 | method | |
466 | 464 | get_config | tensorflow/tensorflow/lite/python/lite_test.py | 1906 | method | |
467 | 465 | FromKerasFile | tensorflow/tensorflow/lite/python/lite_test.py | 1912 | class | |
468 | 466 | setUp | tensorflow/tensorflow/lite/python/lite_test.py | 1914 | method | |
469 | 467 | tearDown | tensorflow/tensorflow/lite/python/lite_test.py | 1921 | method | |
470 | 468 | UnknownShapes | tensorflow/tensorflow/lite/python/lite_v2_test.py | 1047 | class | |
471 | 469 | model | tensorflow/tensorflow/lite/python/lite_v2_test.py | 1056 | method | |
472 | 470 | model | tensorflow/tensorflow/lite/python/lite_v2_test.py | 1077 | method | |
473 | 471 | calibration_gen | tensorflow/tensorflow/lite/python/lite_v2_test.py | 1093 | method | |
474 | 472 | model | tensorflow/tensorflow/lite/python/lite_v2_test.py | 1157 | method | |
475 | 473 | model | tensorflow/tensorflow/lite/python/lite_v2_test.py | 1178 | method | |
476 | 474 | OpHint | tensorflow/tensorflow/lite/python/op_hint.py | 97 | class | A class that helps build tflite function invocations.
It allows you to take a bunch of TensorFlow ops and annotate the construction
such that toco knows how to convert it to tflite. This embeds a pseudo
function in a TensorFlow graph. This allows embedding high-level API usage
information in a lower level TensorFlow implementation so that an alternative
implementation can be substituted later.
Essentially, any "input" into this pseudo op is fed into an identity, and
attributes are added to that input before being used by the constituent ops
that make up the pseudo op. A similar process is done to any output that
is to be exported from the current op. |
477 | 475 | add_input | tensorflow/tensorflow/lite/python/op_hint.py | 388 | method | Add a wrapped input argument to the hint.
Args:
*args: The input tensor.
**kwargs:
"name" label
"tag" a tag to group multiple arguments that will be aggregated. I.e.
a string like 'cool_input'. Basically multiple inputs can be added
to the same hint for parallel operations that will eventually be
combined. An example would be static_rnn which creates multiple copies
of state or inputs.
"aggregate" aggregation strategy that is valid only when tag is not None.
Acceptable values are OpHint.AGGREGATE_FIRST, OpHint.AGGREGATE_LAST,
and OpHint.AGGREGATE_STACK.
"index_override" The global index to use. This corresponds to the
argument order in the final stub that will be generated.
Returns:
The wrapped input tensor. |
478 | 476 | add_output | tensorflow/tensorflow/lite/python/op_hint.py | 410 | method | Add a wrapped output argument to the hint.
Args:
*args: The output tensor.
**kwargs:
"name" label
"tag" a tag to group multiple arguments that will be aggregated. I.e.
a string like 'cool_input'. Basically multiple inputs can be added
to the same hint for parallel operations that will eventually be
combined. An example would be static_rnn which creates multiple copies
of state or inputs.
"aggregate" aggregation strategy that is valid only when tag is not None.
Acceptable values are OpHint.AGGREGATE_FIRST, OpHint.AGGREGATE_LAST,
and OpHint.AGGREGATE_STACK.
"index_override" The global index to use. This corresponds to the
argument order in the final stub that will be generated.
Returns:
The wrapped output tensor. |
479 | 477 | add_inputs | tensorflow/tensorflow/lite/python/op_hint.py | 432 | method | Add a sequence of inputs to the function invocation.
Args:
*args: List of inputs to be converted (should be Tf.Tensor).
**kwargs: This allows 'names' which should be a list of names.
Returns:
Wrapped inputs (identity standins that have additional metadata). These
are also tf.Tensor's. |
480 | 478 | add_outputs | tensorflow/tensorflow/lite/python/op_hint.py | 451 | method | Add a sequence of outputs to the function invocation.
Args:
*args: List of outputs to be converted (should be tf.Tensor).
**kwargs: See
Returns:
Wrapped outputs (identity standins that have additional metadata). These
are also tf.Tensor's. |
481 | 479 | add | tensorflow/tensorflow/lite/python/op_hint.py | 229 | method | Return a wrapped tensor of an input tensor as an argument.
Args:
arg: A TensorFlow tensor that should be considered an argument.
tag: String tag to identify arguments that should be packed.
name: Name of argument. This is included in the Identity hint op names.
aggregate: Strategy to aggregate.
Acceptable values are OpHint.AGGREGATE_FIRST, OpHint.AGGREGATE_LAST,
and OpHint.AGGREGATE_STACK.
Note, aggregate is only valid if tag is specified.
index_override: Specify what input/output index should this be in the
final stub. i.e. add(arg0, index=1); add(arg1, index=0) will make the
final stub be stub_func(inputs=[arg1, arg0], outputs=[]) rather than
the default call-order-based ordering.
Returns:
A tensor representing the wrapped argument.
Raises:
ValueError: When indices are not consistent. |
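The `index_override` ordering rule described above (explicit indices claim their slots, the rest fill in call order) can be sketched independently of TensorFlow; `order_by_index` is a hypothetical helper, not part of the op_hint module:

```python
def order_by_index(entries):
    # entries: list of (name, index_override or None), in call order.
    # Explicit indices claim their slots; remaining names fill the
    # leftover slots in call order.
    n = len(entries)
    result = [None] * n
    pending = []
    for name, idx in entries:
        if idx is not None:
            if idx >= n or result[idx] is not None:
                raise ValueError("inconsistent index_override")
            result[idx] = name
        else:
            pending.append(name)
    it = iter(pending)
    return [slot if slot is not None else next(it) for slot in result]
```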
482 | 480 | assert_dictlist_has_keys | tensorflow/tensorflow/lite/python/op_hint.py | 365 | method | |
483 | 481 | find_all_hinted_output_nodes | tensorflow/tensorflow/lite/python/op_hint.py | 1257 | function | Find all Ophints output nodes in the graph.
This is used to get all the output nodes that are ophinted; it is important
for operations like convert_variables_to_constants to keep the whole ophint
structure intact.
Note: only one of session or graph_def should be used, not both.
Why is this useful? Some TensorFlow ops (e.g. bidirectional rnn) can
generate multiple outputs for an unfused subgraph. If not all output nodes
are consumed, graph optimization can drop the unused nodes and leave the
ophints in an invalid state (due to missing ophinted output nodes). So it is
important to find all hinted output nodes and make sure they are not
discarded.
Args:
session: A TensorFlow session that contains the graph to convert.
graph_def: A graph def that we should convert.
Returns:
A list of OpHints output nodes.
Raises:
ValueError: If both session and graph_def are provided. |
484 | 482 | is_ophint_converted | tensorflow/tensorflow/lite/python/op_hint.py | 1292 | function | |
485 | 483 | convert_op_hints_to_stubs | tensorflow/tensorflow/lite/python/op_hint.py | 1305 | function | Converts a graphdef with LiteOp hints into stub operations.
This is used to prepare for toco conversion of complex intrinsic usages.
Note: only one of session or graph_def should be used, not both.
Args:
session: A TensorFlow session that contains the graph to convert.
graph_def: A graph def that we should convert.
write_callback: A function pointer that can be used to write intermediate
steps of graph transformation (optional).
Returns:
A new graphdef with all ops contained in OpHints being replaced by
a single op call with the right parameters.
Raises:
ValueError: If both session and graph_def are provided. |
486 | 484 | convert_dtype_to_tflite_type | tensorflow/tensorflow/lite/python/util.py | 59 | function | Converts tf.dtype to TFLite proto type.
Args:
tf_dtype: tf.dtype
Raises:
ValueError: Unsupported tf.dtype.
Returns:
types_flag_pb2. |
487 | 485 | get_tensor_name | tensorflow/tensorflow/lite/python/util.py | 77 | function | Returns name of the input tensor.
Args:
tensor: tf.Tensor
Returns:
str |
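The name cleanup that `get_tensor_name` performs can be sketched at the string level. This is a hypothetical simplification, not the TFLite implementation: it assumes only the default `:0` output-index suffix is dropped, while other suffixes are kept.

```python
def tensor_base_name(tensor_name):
    # Hypothetical sketch: drop a ":0" output-index suffix from a
    # tf.Tensor-style name, leaving other index suffixes intact.
    parts = tensor_name.split(":")
    if len(parts) == 2 and parts[1] == "0":
        return parts[0]
    return tensor_name
```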
488 | 486 | get_tensors_from_tensor_names | tensorflow/tensorflow/lite/python/util.py | 98 | function | Gets the Tensors associated with the `tensor_names` in the provided graph.
Args:
graph: TensorFlow Graph.
tensor_names: List of strings that represent names of tensors in the graph.
Returns:
A list of Tensor objects in the same order the names are provided.
Raises:
ValueError:
tensor_names contains an invalid tensor name. |
489 | 487 | set_tensor_shapes | tensorflow/tensorflow/lite/python/util.py | 141 | function | Sets Tensor shape for each tensor if the shape is defined.
Args:
tensors: TensorFlow ops.Tensor.
shapes: Dict of strings representing input tensor names to list of
integers representing input shapes (e.g., {"foo": [1, 16, 16, 3]}).
Raises:
ValueError:
`shapes` contains an invalid tensor.
`shapes` contains an invalid shape for a valid tensor. |
490 | 488 | get_grappler_config | tensorflow/tensorflow/lite/python/util.py | 172 | function | Creates a tf.compat.v1.ConfigProto for configuring Grappler.
Args:
optimizers_list: List of strings that represents the list of optimizers.
Returns:
tf.ConfigProto. |
491 | 489 | run_graph_optimizations | tensorflow/tensorflow/lite/python/util.py | 188 | function | Apply standard TensorFlow optimizations to the graph_def.
Args:
graph_def: Frozen GraphDef to be optimized.
input_arrays: List of arrays that are considered inputs of the graph.
output_arrays: List of arrays that are considered outputs of the graph.
config: tf.ConfigProto.
graph: TensorFlow Graph. Required when Eager mode is enabled. (default None)
Returns:
A new, optimized GraphDef. |
492 | 490 | freeze_graph | tensorflow/tensorflow/lite/python/util.py | 241 | function | Returns a frozen GraphDef.
Runs a Grappler pass and freezes a graph with Variables in it. Otherwise the
existing GraphDef is returned. The Grappler pass is only run on models that
are frozen in order to inline the functions in the graph.
If OpHints is present, it will try to convert the OpHint graph.
Args:
sess: TensorFlow Session.
input_tensors: List of input tensors.
output_tensors: List of output tensors (only .name is used from this).
Returns:
Frozen GraphDef. |
493 | 491 | is_frozen_graph | tensorflow/tensorflow/lite/python/util.py | 281 | function | Determines if the graph is frozen.
Determines if a graph has previously been frozen by checking for any
operations of type Variable*. If variables are found, the graph is not frozen.
Args:
sess: TensorFlow Session.
Returns:
Bool. |
494 | 492 | build_debug_info_func | tensorflow/tensorflow/lite/python/util.py | 300 | function | Returns a method to retrieve the `GraphDebugInfo` from the original graph.
Args:
original_graph: The original `Graph` containing all the op stack traces.
Returns:
A function which retrieves the stack traces from the original graph and
converts them to a `GraphDebugInfo` for a given set of nodes. |
495 | 493 | convert_debug_info_func | tensorflow/tensorflow/lite/python/util.py | 339 | function | Returns a method to retrieve the `GraphDebugInfo` from the original graph.
Args:
saved_debug_info: The `GraphDebugInfo` containing all the debug info.
Returns:
A function which retrieves the stack traces from the original graph and
converts them to a `GraphDebugInfo` for a given set of nodes. |
496 | 494 | get_debug_info | tensorflow/tensorflow/lite/python/util.py | 368 | function | Returns the debug info for the original nodes in the `converted_graph`.
Args:
nodes_to_debug_info_func: The method to collect the op debug info for the
nodes.
converted_graph: A `GraphDef` after optimization and transformation.
Returns:
`GraphDebugInfo` for all the original nodes in `converted_graph`. |
497 | 495 | convert_bytes_to_c_source | tensorflow/tensorflow/lite/python/util.py | 399 | function | Returns strings representing a C constant array containing `data`.
Args:
data: Byte array that will be converted into a C constant.
array_name: String to use as the variable name for the constant array.
max_line_width: The longest line length, for formatting purposes.
include_guard: Name to use for the include guard macro definition.
include_path: Optional path to include in the source file.
use_tensorflow_license: Whether to include the standard TensorFlow Apache2
license in the generated files.
Returns:
Text that can be compiled as a C source file to link in the data as a
literal array of values.
Text that can be used as a C header file to reference the literal array. |
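The core of the bytes-to-C-array conversion can be sketched with a minimal helper. This is a hypothetical simplification: the real `convert_bytes_to_c_source` also handles line wrapping to `max_line_width`, include guards, include paths, and the license header.

```python
def bytes_to_c_array(data, array_name):
    # Hypothetical minimal sketch: emit each byte as a hex literal and
    # expose the length as a companion constant, as TFLite micro-style
    # embedded models typically expect.
    body = ", ".join("0x%02x" % b for b in data)
    return ("const unsigned char %s[] = {%s};\n"
            "const unsigned int %s_len = %d;"
            % (array_name, body, array_name, len(data)))
```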
498 | 496 | wrapped_toco_convert | tensorflow/tensorflow/lite/python/wrap_toco.py | 29 | function | Wraps TocoConvert with lazy loader. |
499 | 497 | wrapped_get_potentially_supported_ops | tensorflow/tensorflow/lite/python/wrap_toco.py | 41 | function | Wraps TocoGetPotentiallySupportedOps with lazy loader. |
500 | 498 | wrapped_experimental_mlir_quantize | tensorflow/tensorflow/lite/python/wrap_toco.py | 46 | function | Wraps experimental mlir quantize model. |
501 | 499 | wrapped_experimental_mlir_sparsify | tensorflow/tensorflow/lite/python/wrap_toco.py | 55 | function | Wraps experimental mlir sparsify model. |
502 | 500 | Calibrator | tensorflow/tensorflow/lite/python/optimize/calibrator.py | 33 | class | Calibrates a floating point model and then quantizes it.
This is an internal class, not a public interface. |
503 | 501 | calibrate_and_quantize | tensorflow/tensorflow/lite/python/optimize/calibrator.py | 58 | method | Calibrates the model with specified generator and then quantizes it.
The input shapes of the calibrator are resized with the calibration data if
`resize_input` is set.
Returns:
A quantized model.
Args:
dataset_gen: A generator that generates calibration samples.
input_type: A tf.dtype representing the desired real-value input type.
output_type: A tf.dtype representing the desired real-value output type.
allow_float: A boolean. False if the resulting model cannot perform float
computation, useful when targeting an integer-only backend.
If False, an error will be thrown if an operation cannot be
quantized; otherwise the model will fall back to float ops.
activations_type: A tf.dtype representing the desired type for
activations.
resize_input: A boolean. True if the shape of the sample data is different
from the input. |
504 | 502 | calibrate_and_quantize_single | tensorflow/tensorflow/lite/python/optimize/calibrator.py | 100 | method | Calibrates the model with specified generator and then quantizes it.
Only the single op with output op_output_name will be quantized.
The input shapes of the calibrator are resized with the calibration data.
Returns:
A quantized model.
Args:
dataset_gen: A generator that generates calibration samples.
input_type: A tf.dtype representing the desired real-value input type.
output_type: A tf.dtype representing the desired real-value output type.
allow_float: A boolean. False if the resulting model cannot perform float
computation, useful when targeting an integer-only backend. If False, an
error will be thrown if an operation cannot be quantized; otherwise the
model will fall back to float ops.
op_output_name: A string, only this op will be quantized.
resize_input: A boolean. True if the shape of the sample data is different
from the input. |
505 | 503 | calibrate | tensorflow/tensorflow/lite/python/optimize/calibrator.py | 140 | method | Calibrates the model with specified generator.
Returns:
A model with min and max calibration stats.
Args:
dataset_gen: A generator that generates calibration samples. |
506 | 504 | TemporaryDirectoryResource | tensorflow/tensorflow/lite/schema/upgrade_schema.py | 57 | function | |
507 | 505 | Converter | tensorflow/tensorflow/lite/schema/upgrade_schema.py | 65 | class | Converts TensorFlow flatbuffer models from old to new version of schema.
This can convert from any version to the latest version. It uses
an incremental upgrade strategy to go from version to version.
Usage:
converter = Converter()
converter.Convert("a.tflite", "a.json")
converter.Convert("b.json", "b.tflite") |
508 | 506 | Convert | tensorflow/tensorflow/lite/schema/upgrade_schema.py | 309 | method | Perform schema conversion from input_file to output_file.
Args:
input_file: Filename of TensorFlow Lite data to convert from. Must
be `.json` or `.bin` extension files for JSON or Binary forms of
the TensorFlow FlatBuffer schema.
output_file: Filename to write to. Extension also must be `.json`
or `.bin`.
Raises:
RuntimeError: Generated when none of the upgrader's supported schemas
matches the `input_file` data. |
509 | 507 | FindSchema | tensorflow/tensorflow/lite/schema/upgrade_schema.py | 88 | method | |
510 | 508 | RemapOperator | tensorflow/tensorflow/lite/schema/upgrade_schema.py | 209 | method | Go from old schema op name to new schema op name.
Args:
opcode_name: String representing the ops (see :schema.fbs).
Returns:
Converted opcode_name from V1 to V2. |
511 | 509 | RemapOperatorType | tensorflow/tensorflow/lite/schema/upgrade_schema.py | 232 | method | Remap operator structs from old names to new names.
Args:
operator_type: String representing the builtin operator data type
string.
(see :schema.fbs).
Raises:
ValueError: When the model has consistency problems.
Returns:
Upgraded builtin operator data type as a string. |
512 | 510 | JsonDumpAndFlush | tensorflow/tensorflow/lite/schema/upgrade_schema_test.py | 242 | function | Write the dictionary `data` to a JSON file `fp` (and flush).
Args:
data: A dictionary that is JSON serializable.
fp: File-like object to write to. |
513 | 511 | MultiGenState | tensorflow/tensorflow/lite/testing/generate_examples_lib.py | 176 | class | State of multiple set generation process.
This state class stores the information needed when generating the examples
for multiple test sets: the open archive object to be shared, information on
the test target for the current iteration of generation, and the accumulated
generation results. |
514 | 512 | Options | tensorflow/tensorflow/lite/testing/generate_examples_lib.py | 203 | class | All options for example generation. |
515 | 513 | generate_examples | tensorflow/tensorflow/lite/testing/generate_examples_lib.py | 256 | function | Generate examples for a test set.
Args:
options: Options containing information to generate examples.
Raises:
RuntimeError: if the test function cannot be found. |
516 | 514 | generate_multi_set_examples | tensorflow/tensorflow/lite/testing/generate_examples_lib.py | 294 | function | Generate examples for test sets.
Args:
options: Options containing information to generate examples.
test_sets: List of the name of test sets to generate examples. |
517 | 515 | make_report_table | tensorflow/tensorflow/lite/testing/generate_examples_report.py | 32 | function | Make an HTML report of the success/failure reports.
Args:
fp: File-like object in which to put the html.
title: Title of the zip file this pertains to.
reports: a list of conversion attempts. (report_args, report_vals) i.e.
({"shape": [1,2,3], "type": "tf.float32"},
{"tf": "SUCCESS", "toco": "FAILURE", "toco_log": "Unsupported type.",
"tf_log": ""}) |
518 | 516 | toco_options | tensorflow/tensorflow/lite/testing/toco_convert.py | 31 | function | Create TOCO options to process a model.
Args:
data_types: input and inference types used by TOCO.
input_arrays: names of the input tensors.
output_arrays: names of the output tensors.
shapes: shapes of the input tensors
extra_toco_options: additional toco options
Returns:
the options in a string. |
519 | 517 | toco_convert | tensorflow/tensorflow/lite/testing/toco_convert.py | 78 | function | Convert a model's graph def into a tflite model.
NOTE: this currently shells out to the toco binary, but we would like
to convert to Python API tooling in the future.
Args:
options: An Options instance.
graph_def: A GraphDef object.
input_tensors: List of input tensor tuples `(name, shape, type)`.
output_tensors: List of output tensors (names).
**kwargs: Extra options to be passed.
Returns:
output tflite model, log_txt from conversion
or None, log_txt if it did not convert properly. |
520 | 518 | ExtraTocoOptions | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 88 | class | Additional toco options besides input, output, shape. |
521 | 519 | create_tensor_data | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 106 | function | Build tensor data spreading the range [min_value, max_value). |
522 | 520 | create_scalar_data | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 126 | function | Build scalar tensor data range from min_value to max_value exclusively. |
523 | 521 | freeze_graph | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 144 | function | Freeze the current graph.
Args:
session: TensorFlow session containing the graph.
outputs: List of output tensors
Returns:
The frozen graph_def. |
524 | 522 | format_result | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 158 | function | Convert a tensor to a format that can be used in test specs. |
525 | 523 | write_examples | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 168 | function | Given a list `examples`, write a text format representation.
The file format is CSV-like with a simple repeated pattern. We would like
to use proto here, but we can't yet due to interfacing with the Android
team using this format.
Args:
fp: File-like object to write to.
examples: Example dictionary consisting of keys "inputs" and "outputs" |
526 | 524 | get_input_shapes_map | tensorflow/tensorflow/lite/testing/zip_test_utils.py | 225 | function | Gets a map of input names to shapes.
Args:
input_tensors: List of input tensor tuples `(name, shape, type)`.
Returns:
{string : list of integers}. |
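The mapping that `get_input_shapes_map` builds can be sketched from the docstring alone. This is a hypothetical sketch under the assumption that tensor names carry a `:0`-style suffix that is stripped for the map key:

```python
def input_shapes_map(input_tensors):
    # Hypothetical sketch: strip the ":0"-style output-index suffix from
    # each tensor name and map the base name to its shape, matching the
    # (name, shape, type) tuples described above.
    return {name.split(":")[0]: list(shape)
            for name, shape, _ in input_tensors}
```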
527 | 525 | get_filepath | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 47 | function | Returns the full path of the filename.
Args:
filename: Subdirectory and name of the model file.
base_dir: Base directory containing model file.
Returns:
str. |
528 | 526 | get_image | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 63 | function | Returns an image loaded into an np.ndarray with dims [1, size, size, 3].
Args:
size: Size of image.
Returns:
np.ndarray. |
529 | 527 | evaluate_frozen_graph | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 222 | function | Returns a function that evaluates the frozen graph on input data.
Args:
filename: Full filepath of file containing frozen GraphDef.
input_arrays: List of input tensors to freeze graph with.
output_arrays: List of output tensors to freeze graph with.
Returns:
Lambda function ([np.ndarray data] : [np.ndarray result]). |
530 | 528 | evaluate_saved_model | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 260 | function | Returns a function that evaluates the SavedModel on input data.
Args:
directory: SavedModel directory to convert.
tag_set: Set of tags identifying the MetaGraphDef within the SavedModel to
analyze. All tags in the tag set must be present.
signature_key: Key identifying SignatureDef containing inputs and outputs.
Returns:
Lambda function ([np.ndarray data] : [np.ndarray result]). |
531 | 529 | evaluate_keras_model | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 286 | function | Returns a function that evaluates the tf.keras model on input data.
Args:
filename: Full filepath of HDF5 file containing the tf.keras model.
Returns:
Lambda function ([np.ndarray data] : [np.ndarray result]). |
532 | 530 | compare_models | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 299 | function | Compares TensorFlow and TFLite models.
Unless the input data is provided, the models are compared with random data.
Args:
tflite_model: Serialized TensorFlow Lite model.
tf_eval_func: Lambda function that takes in input data and outputs the
results of the TensorFlow model ([np.ndarray data] : [np.ndarray result]).
input_shapes_resize: A map where the key is the input tensor name and the
value is the shape of the input tensor. This resize happens after model
conversion, prior to calling allocate tensors. (default None)
input_data: np.ndarray to pass into models during inference. (default None)
input_data_range: A map where the key is the input tensor name and
the value is a tuple (min_val, max_val) which specifies the value range of
the corresponding input tensor. For example, '{'input1': (1, 5)}' means to
generate a random value for tensor `input1` within range [1.0, 5.0)
(half-inclusive). (default None)
tolerance: Decimal place to check accuracy to. (default 5). |
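The half-inclusive `[min_val, max_val)` sampling described for `input_data_range` can be sketched with the stdlib. The helper name `random_input` is hypothetical; the real library generates numpy arrays rather than Python lists.

```python
import random

def random_input(shape, value_range, seed=None):
    # Hypothetical sketch of half-inclusive [lo, hi) sampling for an
    # input tensor of the given shape (flattened to a list here).
    lo, hi = value_range
    rng = random.Random(seed)
    count = 1
    for dim in shape:
        count *= dim
    return [lo + rng.random() * (hi - lo) for _ in range(count)]
```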
533 | 531 | compare_models_v2 | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib.py | 336 | function | Compares TensorFlow and TFLite models for TensorFlow 2.0.
Unless the input data is provided, the models are compared with random data.
Currently only 1 input and 1 output are supported by this function.
Args:
tflite_model: Serialized TensorFlow Lite model.
tf_eval_func: Function to evaluate TensorFlow model. Either a lambda
function that takes in input data and outputs the results or a TensorFlow
ConcreteFunction.
input_data: np.ndarray to pass into models during inference. (default None).
input_data_range: A map where the key is the input tensor name and
the value is a tuple (min_val, max_val) which specifies the value range of
the corresponding input tensor. For example, '{'input1': (1, 5)}' means to
generate a random value for tensor `input1` within range [1.0, 5.0)
(half-inclusive). (default None)
tolerance: Decimal place to check accuracy to. (default 5) |
534 | 532 | EvaluateFrozenGraph | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib_test.py | 42 | class | |
535 | 533 | plus_placeholder | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib_test.py | 92 | method | |
536 | 534 | EvaluateSavedModel | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib_test.py | 142 | class | |
537 | 535 | EvaluateKerasModel | tensorflow/tensorflow/lite/testing/model_coverage/model_coverage_lib_test.py | 160 | class | |
538 | 536 | relu1 | tensorflow/tensorflow/lite/testing/op_tests/conv_activation.py | 143 | function | |
539 | 537 | make_l2_pool | tensorflow/tensorflow/lite/testing/op_tests/pool.py | 119 | function | Given an input perform a sequence of TensorFlow ops to produce l2pool. |
540 | 538 | html_escape | tensorflow/tensorflow/lite/toco/logging/gen_html.py | 37 | function | |
541 | 539 | get_input_type_from_signature | tensorflow/tensorflow/lite/toco/logging/gen_html.py | 41 | function | Parses op_signature and returns a string denoting the input tensor type.
Args:
op_signature: a string specifying the signature of a particular operator.
The signature of an operator contains the input tensor's shape and type,
output tensor's shape and type, operator's name and its version. It has
the following schema:
INPUT:input_1_shape::input_1_type::input_2_shape::input_2_type::..
::OUTPUT:output_1_shape::output_1_type::output_2_shape::output_2_type::
..::NAME:operator_name::VERSION:operator_version
An example of an operator signature is:
INPUT:[1,73,73,160]::float::[64,1,1,160]::float::[64]::float::
OUTPUT:[1,73,73,64]::float::NAME:Conv::VERSION:1
Returns:
A string denoting the input tensors' types, in the form of shape/type pairs
separated by commas. For example:
shape:[1,73,73,160],type:float,shape:[64,1,1,160],type:float,shape:[64],
type:float |
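A simplified parser for the signature schema above can be sketched as follows. This is a hypothetical sketch assuming a well-formed `INPUT:...::OUTPUT:...` string; the real `get_input_type_from_signature` may handle more edge cases.

```python
def input_types_from_signature(op_signature):
    # Hypothetical parser: take the fields between "INPUT:" and
    # "OUTPUT:", which alternate shape, type, shape, type, ...
    input_part = op_signature.split("OUTPUT:")[0]
    input_part = input_part[len("INPUT:"):].strip(":")
    fields = [f for f in input_part.split("::") if f]
    pairs = zip(fields[0::2], fields[1::2])
    return ",".join("shape:%s,type:%s" % (s, t) for s, t in pairs)
```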
542 | 540 | get_operator_type | tensorflow/tensorflow/lite/toco/logging/gen_html.py | 78 | function | |
543 | 541 | HTMLGenerator | tensorflow/tensorflow/lite/toco/logging/gen_html.py | 87 | class | Utility class to generate an HTML report. |
544 | 542 | generate | tensorflow/tensorflow/lite/toco/logging/gen_html.py | 111 | method | Generates the HTML report and writes it to local directory.
This function uses the fields in `toco_conversion_log_before` and
`toco_conversion_log_after` to populate the HTML content. Certain markers
(placeholders) in the HTML template are then substituted with the fields
from the protos. Once finished it will write the HTML file to the specified
local file path.
Args:
toco_conversion_log_before: A `TocoConversionLog` protobuf generated
before the model is converted by TOCO.
toco_conversion_log_after: A `TocoConversionLog` protobuf generated after
the model is converted by TOCO.
post_training_quant_enabled: A boolean, whether post-training quantization
is enabled.
dot_before: A string, the dot representation of the model
before the conversion.
dot_after: A string, the dot representation of the model after
the conversion.
toco_err_log: A string, the logs emitted by TOCO during conversion. The
caller needs to ensure that this string is properly anonymized (any kind
of user data should be eliminated).
tflite_graph_path: A string, the filepath to the converted TFLite model.
Raises:
RuntimeError: When error occurs while generating the template. |
545 | 543 | gen_conversion_log_html | tensorflow/tensorflow/lite/toco/logging/gen_html.py | 208 | function | Generates an HTML report about the conversion process.
Args:
conversion_log_dir: A string specifying the file directory of the conversion
logs. It is required that, before calling this function, the
`conversion_log_dir` already contains the following files:
`toco_log_before.pb`, `toco_log_after.pb`, `toco_tf_graph.dot`,
`toco_tflite_graph.dot`.
quantization_enabled: A boolean, passed from the tflite converter to
indicate whether post-training quantization is enabled during conversion.
tflite_graph_path: A string, the filepath to the converted TFLite model.
Raises:
IOError: When any of the required files doesn't exist. |
546 | 544 | execute | tensorflow/tensorflow/lite/toco/python/toco_from_protos.py | 32 | function | Runs the converter. |
547 | 545 | TensorName | tensorflow/tensorflow/lite/toco/python/toco_from_protos_test.py | 30 | function | Get the canonical tensor name (without the ':0' suffix). |
548 | 546 | get_image | tensorflow/tensorflow/lite/tools/convert_image_to_csv.py | 41 | function | Returns an image loaded into an np.ndarray with dims [height, width, (3 or 1)].
Args:
width: Width to rescale the image to.
height: Height to rescale the image to.
want_grayscale: Whether the result should be converted to grayscale.
filepath: Path of the image file.
Returns:
np.ndarray of shape (height, width, channels) where channels is 1 if
want_grayscale is true, otherwise 3. |
549 | 547 | array_to_int_csv | tensorflow/tensorflow/lite/tools/convert_image_to_csv.py | 65 | function | Converts all elements in a numerical array to a comma-separated string.
Args:
array_data: Numerical array to convert.
Returns:
String containing array values as integers, separated by commas. |
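The conversion described for `array_to_int_csv` can be sketched in one line. This is a hypothetical sketch: it assumes values are rounded to the nearest integer before joining, which the real helper may or may not do.

```python
def array_to_int_csv(array_data):
    # Hypothetical sketch: round each value to an integer and join
    # with commas, per the description above.
    return ",".join(str(int(round(v))) for v in array_data)
```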
550 | 548 | convert_bytearray_to_object | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 38 | function | Converts a tflite model from a bytearray to an object for parsing. |
551 | 549 | read_model | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 44 | function | Reads a tflite model as a python object.
Args:
input_tflite_file: Full path name to the input tflite file
Raises:
RuntimeError: If input_tflite_file path is invalid.
IOError: If input_tflite_file cannot be opened.
Returns:
A python object corresponding to the input tflite file. |
552 | 550 | read_model_with_mutable_tensors | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 64 | function | Reads a tflite model as a python object with mutable tensors.
Similar to read_model() with the addition that the returned object has
mutable tensors (read_model() returns an object with immutable tensors).
Args:
input_tflite_file: Full path name to the input tflite file
Raises:
RuntimeError: If input_tflite_file path is invalid.
IOError: If input_tflite_file cannot be opened.
Returns:
A mutable python object corresponding to the input tflite file. |
553 | 551 | convert_object_to_bytearray | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 83 | function | Converts a tflite model from an object to a bytearray. |
554 | 552 | write_model | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 93 | function | Writes the tflite model, a python object, into the output file.
Args:
model_object: A tflite model as a python object
output_tflite_file: Full path name to the output tflite file.
Raises:
IOError: If output_tflite_file path is invalid or cannot be opened. |
555 | 553 | strip_strings | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 108 | function | Strips all nonessential strings from the model to reduce model size.
We remove the following strings:
(find strings by searching ":string" in the tensorflow lite flatbuffer schema)
1. Model description
2. SubGraph name
3. Tensor names
We retain OperatorCode custom_code and Metadata name.
Args:
model: The model from which to remove nonessential strings. |
556 | 554 | randomize_weights | tensorflow/tensorflow/lite/tools/flatbuffer_utils.py | 130 | function | Randomize weights in a model.
Args:
model: The model in which to randomize weights.
random_seed: The input to the random number generator (default value is 0). |
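The buffer-level effect of `randomize_weights` can be sketched with the stdlib. The helper name `randomize_buffer` is hypothetical; the real function walks the model's buffer list and only touches weight data, but the seeded, length-preserving replacement is the core idea.

```python
import random

def randomize_buffer(buffer_data, random_seed=0):
    # Hypothetical sketch: replace each byte of a weight buffer with a
    # seeded pseudo-random byte, keeping the buffer length unchanged so
    # the model structure stays valid.
    rng = random.Random(random_seed)
    return bytes(rng.randrange(256) for _ in buffer_data)
```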
557 | 555 | build_mock_flatbuffer_model | tensorflow/tensorflow/lite/tools/test_utils.py | 30 | function | Creates a flatbuffer containing an example model. |
558 | 556 | load_model_from_flatbuffer | tensorflow/tensorflow/lite/tools/test_utils.py | 211 | function | Loads a model as a python object from a flatbuffer model. |
559 | 557 | build_mock_model | tensorflow/tensorflow/lite/tools/test_utils.py | 218 | function | Creates an object containing an example model. |
560 | 558 | TensorTypeToName | tensorflow/tensorflow/lite/tools/visualize.py | 202 | function | Converts a numerical enum to a readable tensor type. |
561 | 559 | BuiltinCodeToName | tensorflow/tensorflow/lite/tools/visualize.py | 210 | function | Converts a builtin op code enum to a readable name. |
562 | 560 | NameListToString | tensorflow/tensorflow/lite/tools/visualize.py | 218 | function | Converts a list of integers to the equivalent ASCII string. |
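`NameListToString`'s conversion is direct to sketch: each integer is treated as an ASCII code point. A minimal equivalent, assuming valid code points:

```python
def name_list_to_string(name_list):
    # Each integer in the flatbuffer name field is an ASCII code point.
    return "".join(chr(c) for c in name_list)
```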
563 | 561 | OpCodeMapper | tensorflow/tensorflow/lite/tools/visualize.py | 229 | class | Maps an opcode index to an op name. |
564 | 562 | DataSizeMapper | tensorflow/tensorflow/lite/tools/visualize.py | 245 | class | For buffers, report the number of bytes. |
565 | 563 | TensorMapper | tensorflow/tensorflow/lite/tools/visualize.py | 255 | class | Maps a list of tensor indices to a tooltip hoverable indicator of more. |
566 | 564 | GenerateGraph | tensorflow/tensorflow/lite/tools/visualize.py | 278 | function | Produces the HTML required to have a d3 visualization of the dag. |
567 | 565 | GenerateTableHtml | tensorflow/tensorflow/lite/tools/visualize.py | 337 | function | Given a list of object values and keys to print, make an HTML table.
Args:
items: Items to print an array of dicts.
keys_to_print: (key, display_fn). `key` is a key in the object. i.e.
items[0][key] should exist. display_fn is the mapping function on display.
i.e. the displayed html cell will have the string returned by
`mapping_fn(items[0][key])`.
display_index: add a column which is the index of each row in `items`.
Returns:
An html table. |
568 | 566 | CamelCaseToSnakeCase | tensorflow/tensorflow/lite/tools/visualize.py | 375 | function | Converts an identifier in CamelCase to snake_case. |
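The CamelCase-to-snake_case conversion is commonly done with two regex passes; the sketch below uses that recipe, though the real helper may differ on edge cases such as runs of capitals (e.g. "HTMLParser").

```python
import re

def camel_to_snake(name):
    # First pass: insert "_" before a capital that starts a new word.
    s1 = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    # Second pass: split lowercase/digit followed by a capital, lowercase all.
    return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s1).lower()
```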
569 | 567 | FlatbufferToDict | tensorflow/tensorflow/lite/tools/visualize.py | 381 | function | Converts a hierarchy of FB objects into a nested dict.
We avoid transforming big parts of the flat buffer into python arrays. This
speeds conversion from ten minutes to a few seconds on big graphs.
Args:
fb: a flat buffer structure. (i.e. ModelT)
preserve_as_numpy: true if all downstream np.arrays should be preserved.
false if all downstream np.array should become python arrays
Returns:
A dictionary representing the flatbuffer rather than a flatbuffer object. |
570 | 568 | CreateDictFromFlatbuffer | tensorflow/tensorflow/lite/tools/visualize.py | 413 | function | |
571 | 569 | CreateHtmlFile | tensorflow/tensorflow/lite/tools/visualize.py | 419 | function | Given a tflite model in `tflite_input` file, produce html description. |
572 | 570 | modify_model_interface | tensorflow/tensorflow/lite/tools/optimize/python/modify_model_interface_lib.py | 52 | function | Modify a quantized model's interface (input/output) from float to integer.
Args:
input_file: Full path name to the input tflite file.
output_file: Full path name to the output tflite file.
input_type: Final input interface type.
output_type: Final output interface type.
Raises:
RuntimeError: If the modification of the model interface was unsuccessful.
ValueError: If the input_type or output_type is unsupported. |
573 | 571 | build_tflite_model_with_full_integer_quantization | tensorflow/tensorflow/lite/tools/optimize/python/modify_model_interface_lib_test.py | 31 | function | |
574 | 572 | get_build_cpus | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 72 | function | |
575 | 573 | make_args | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 80 | function | Construct make command line. |
576 | 574 | make_output | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 94 | function | Invoke make on the target and return output. |
577 | 575 | make | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 99 | function | Invoke make to build tflite C++ sources.
Build dependencies:
apt-get install swig libjpeg-dev zlib1g-dev python3-dev python3-numpy |
578 | 576 | download_dependencies | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 108 | function | Download build dependencies if not already done. |
579 | 577 | CustomBuildExt | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 114 | class | Customized build extension. |
580 | 578 | get_ext_filename | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 117 | method | |
581 | 579 | run | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 123 | method | |
582 | 580 | CustomBuildPy | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 130 | class | |
583 | 581 | run | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 132 | method | |
584 | 582 | get_pybind_include | tensorflow/tensorflow/lite/tools/pip_package/setup.py | 137 | function | The pybind11 include directory is not correctly resolved.
This fixes the include directory to /usr/local/pythonX.X
Returns:
include directories to find pybind11 |
585 | 583 | set_signature_defs | tensorflow/tensorflow/lite/tools/signature/signature_def_utils.py | 25 | function | Sets SignatureDefs to the Metadata of a TfLite flatbuffer buffer.
Args:
tflite_model: Binary TFLite model (bytes or bytes-like object) to which to
add signature_def.
signature_def_map: dict containing SignatureDefs to store in metadata.
Returns:
buffer: A TFLite model binary identical to model buffer with
metadata field containing SignatureDef.
Raises:
ValueError:
tflite_model buffer does not contain a valid TFLite model.
signature_def_map is empty or does not contain a SignatureDef. |
586 | 584 | get_signature_defs | tensorflow/tensorflow/lite/tools/signature/signature_def_utils.py | 51 | function | Get SignatureDef dict from the Metadata of a TfLite flatbuffer buffer.
Args:
tflite_model: TFLite model buffer to get the signature_def.
Returns:
dict containing serving names to SignatureDefs if exists, otherwise, empty
dict.
Raises:
ValueError:
tflite_model buffer does not contain a valid TFLite model.
DecodeError:
SignatureDef cannot be parsed from TfLite SignatureDef metadata. |
587 | 585 | clear_signature_defs | tensorflow/tensorflow/lite/tools/signature/signature_def_utils.py | 78 | function | Clears SignatureDefs from the Metadata of a TfLite flatbuffer buffer.
Args:
tflite_model: TFLite model buffer to remove signature_defs.
Returns:
buffer: A TFLite model binary identical to model buffer with
no SignatureDef metadata.
Raises:
ValueError:
tflite_model buffer does not contain a valid TFLite model. |
588 | 586 | read32 | tensorflow/tensorflow/lite/tutorials/dataset.py | 35 | function | Read 4 bytes from bytestream as an unsigned 32-bit integer. |
589 | 587 | check_image_file_header | tensorflow/tensorflow/lite/tutorials/dataset.py | 41 | function | Validate that filename corresponds to images for the MNIST dataset. |
590 | 588 | check_labels_file_header | tensorflow/tensorflow/lite/tutorials/dataset.py | 57 | function | Validate that filename corresponds to labels for the MNIST dataset. |
591 | 589 | download | tensorflow/tensorflow/lite/tutorials/dataset.py | 67 | function | Download (and unzip) a file from the MNIST dataset if not already done. |
592 | 590 | dataset | tensorflow/tensorflow/lite/tutorials/dataset.py | 86 | function | Download and parse MNIST dataset. |
593 | 591 | train | tensorflow/tensorflow/lite/tutorials/dataset.py | 114 | function | tf.data.Dataset object for MNIST training data. |
594 | 592 | run_eval | tensorflow/tensorflow/lite/tutorials/mnist_tflite.py | 47 | function | Performs evaluation for input image over specified model.
Args:
interpreter: TFLite interpreter initialized with model to execute.
input_image: Image input to the model.
Returns:
output: output tensor of model being executed. |
595 | 593 | set_dlopen_flags | tensorflow/tensorflow/python/pywrap_dlopen_global_flags.py | 43 | function | |
596 | 594 | reset_dlopen_flags | tensorflow/tensorflow/python/pywrap_dlopen_global_flags.py | 48 | function | |
597 | 595 | import_graphdef | tensorflow/tensorflow/python/pywrap_mlir.py | 26 | function | |
598 | 596 | experimental_convert_saved_model_to_mlir | tensorflow/tensorflow/python/pywrap_mlir.py | 32 | function | |
599 | 597 | experimental_convert_saved_model_v1_to_mlir | tensorflow/tensorflow/python/pywrap_mlir.py | 39 | function | |
600 | 598 | experimental_run_pass_pipeline | tensorflow/tensorflow/python/pywrap_mlir.py | 48 | function | |
601 | 599 | enable | tensorflow/tensorflow/python/tf2.py | 30 | function | |
602 | 600 | disable | tensorflow/tensorflow/python/tf2.py | 36 | function | |
603 | 601 | enabled | tensorflow/tensorflow/python/tf2.py | 42 | function | |
604 | 602 | AssertTransformer | tensorflow/tensorflow/python/autograph/converters/asserts.py | 27 | class | Transforms Assert nodes to Call so they can be handled as functions. |
605 | 603 | visit_Assert | tensorflow/tensorflow/python/autograph/converters/asserts.py | 30 | method | |
606 | 604 | transform | tensorflow/tensorflow/python/autograph/converters/asserts.py | 50 | function | |
607 | 605 | BreakTransformer | tensorflow/tensorflow/python/autograph/converters/break_statements.py | 39 | class | Canonicalizes break statements into additional conditionals. |
608 | 606 | visit_Break | tensorflow/tensorflow/python/autograph/converters/break_statements.py | 42 | method | |
609 | 607 | visit_While | tensorflow/tensorflow/python/autograph/converters/break_statements.py | 75 | method | |
610 | 608 | visit_For | tensorflow/tensorflow/python/autograph/converters/break_statements.py | 121 | method | |
611 | 609 | transform | tensorflow/tensorflow/python/autograph/converters/break_statements.py | 183 | function | |
612 | 610 | CallTreeTransformer | tensorflow/tensorflow/python/autograph/converters/call_trees.py | 96 | class | Transforms the call tree by renaming transformed symbols. |
613 | 611 | visit_Lambda | tensorflow/tensorflow/python/autograph/converters/call_trees.py | 99 | method | |
614 | 612 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/converters/call_trees.py | 108 | method | |
615 | 613 | visit_With | tensorflow/tensorflow/python/autograph/converters/call_trees.py | 126 | method | |
616 | 614 | visit_Call | tensorflow/tensorflow/python/autograph/converters/call_trees.py | 160 | method | |
617 | 615 | transform | tensorflow/tensorflow/python/autograph/converters/call_trees.py | 211 | function | Transforms function calls to their compiled counterparts.
Args:
node: AST
ctx: EntityContext
Returns:
A tuple (node, new_names):
node: The transformed AST
new_names: set(string), containing any newly-generated names |
618 | 616 | MockConvertedCall | tensorflow/tensorflow/python/autograph/converters/call_trees_test.py | 30 | class | |
619 | 617 | ConditionalExpressionTransformer | tensorflow/tensorflow/python/autograph/converters/conditional_expressions.py | 28 | class | Converts conditional expressions to functional form. |
620 | 618 | visit_IfExp | tensorflow/tensorflow/python/autograph/converters/conditional_expressions.py | 31 | method | |
621 | 619 | transform | tensorflow/tensorflow/python/autograph/converters/conditional_expressions.py | 48 | function | |
622 | 620 | ContinueCanonicalizationTransformer | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 60 | class | Canonicalizes continue statements into additional conditionals. |
623 | 621 | visit_Continue | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 63 | method | |
624 | 622 | visit_While | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 125 | method | |
625 | 623 | visit_For | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 132 | method | |
626 | 624 | visit_If | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 140 | method | |
627 | 625 | visit_With | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 145 | method | |
628 | 626 | visit_Try | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 150 | method | |
629 | 627 | visit_ExceptHandler | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 158 | method | |
630 | 628 | transform | tensorflow/tensorflow/python/autograph/converters/continue_statements.py | 163 | function | |
631 | 629 | ControlFlowTransformer | tensorflow/tensorflow/python/autograph/converters/control_flow.py | 46 | class | Transforms control flow structures like loops and conditionals. |
632 | 630 | visit_Lambda | tensorflow/tensorflow/python/autograph/converters/control_flow.py | 49 | method | |
633 | 631 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/converters/control_flow.py | 54 | method | |
634 | 632 | visit_If | tensorflow/tensorflow/python/autograph/converters/control_flow.py | 206 | method | |
635 | 633 | visit_While | tensorflow/tensorflow/python/autograph/converters/control_flow.py | 261 | method | |
636 | 634 | visit_For | tensorflow/tensorflow/python/autograph/converters/control_flow.py | 309 | method | |
637 | 635 | AnnotatedDef | tensorflow/tensorflow/python/autograph/converters/control_flow.py | 395 | class | |
638 | 636 | transform | tensorflow/tensorflow/python/autograph/converters/control_flow.py | 402 | function | |
639 | 637 | ControlFlowTransformer | tensorflow/tensorflow/python/autograph/converters/control_flow_deprecated_py2.py | 44 | class | Transforms control flow structures like loops and conditionals. |
640 | 638 | visit_If | tensorflow/tensorflow/python/autograph/converters/control_flow_deprecated_py2.py | 200 | method | |
641 | 639 | visit_While | tensorflow/tensorflow/python/autograph/converters/control_flow_deprecated_py2.py | 400 | method | |
642 | 640 | visit_For | tensorflow/tensorflow/python/autograph/converters/control_flow_deprecated_py2.py | 490 | method | |
643 | 641 | AnnotatedDef | tensorflow/tensorflow/python/autograph/converters/control_flow_deprecated_py2.py | 623 | class | |
644 | 642 | transform | tensorflow/tensorflow/python/autograph/converters/control_flow_deprecated_py2.py | 630 | function | |
645 | 643 | DirectivesTransformer | tensorflow/tensorflow/python/autograph/converters/directives.py | 90 | class | Parses compiler directives and converts them into AST annotations. |
646 | 644 | visit_Name | tensorflow/tensorflow/python/autograph/converters/directives.py | 117 | method | |
647 | 645 | visit_Attribute | tensorflow/tensorflow/python/autograph/converters/directives.py | 126 | method | |
648 | 646 | visit_Assign | tensorflow/tensorflow/python/autograph/converters/directives.py | 134 | method | |
649 | 647 | visit_AugAssign | tensorflow/tensorflow/python/autograph/converters/directives.py | 138 | method | |
650 | 648 | visit_Expr | tensorflow/tensorflow/python/autograph/converters/directives.py | 142 | method | |
651 | 649 | visit_While | tensorflow/tensorflow/python/autograph/converters/directives.py | 173 | method | |
652 | 650 | visit_For | tensorflow/tensorflow/python/autograph/converters/directives.py | 176 | method | |
653 | 651 | transform | tensorflow/tensorflow/python/autograph/converters/directives.py | 180 | function | |
654 | 652 | FunctionTransformer | tensorflow/tensorflow/python/autograph/converters/functions.py | 38 | class | Wraps function bodies around autograph-specific boilerplate. |
655 | 653 | visit_Lambda | tensorflow/tensorflow/python/autograph/converters/functions.py | 53 | method | |
656 | 654 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/converters/functions.py | 81 | method | |
657 | 655 | transform | tensorflow/tensorflow/python/autograph/converters/functions.py | 134 | function | |
658 | 656 | FunctionTransformer | tensorflow/tensorflow/python/autograph/converters/functions_test.py | 31 | class | |
659 | 657 | f | tensorflow/tensorflow/python/autograph/converters/functions_test.py | 35 | method | Docstring. |
660 | 658 | f | tensorflow/tensorflow/python/autograph/converters/functions_test.py | 49 | method | First sentence.
Second sentence.
Returns:
Something. |
661 | 659 | f | tensorflow/tensorflow/python/autograph/converters/functions_test.py | 68 | method | |
662 | 660 | inner_fn_callee | tensorflow/tensorflow/python/autograph/converters/functions_test.py | 85 | method | |
663 | 661 | f | tensorflow/tensorflow/python/autograph/converters/functions_test.py | 89 | method | |
664 | 662 | f | tensorflow/tensorflow/python/autograph/converters/functions_test.py | 121 | method | |
665 | 663 | inner_fn | tensorflow/tensorflow/python/autograph/converters/functions_test.py | 70 | method | |
666 | 664 | inner_fn | tensorflow/tensorflow/python/autograph/converters/functions_test.py | 90 | method | |
667 | 665 | f | tensorflow/tensorflow/python/autograph/converters/functions_test.py | 104 | method | |
668 | 666 | inner_fn | tensorflow/tensorflow/python/autograph/converters/functions_test.py | 106 | method | |
669 | 667 | ListCompTransformer | tensorflow/tensorflow/python/autograph/converters/list_comprehensions.py | 42 | class | Lowers list comprehensions into standard control flow. |
670 | 668 | visit_Assign | tensorflow/tensorflow/python/autograph/converters/list_comprehensions.py | 45 | method | |
671 | 669 | transform | tensorflow/tensorflow/python/autograph/converters/list_comprehensions.py | 81 | function | |
672 | 670 | ListTransformer | tensorflow/tensorflow/python/autograph/converters/lists.py | 51 | class | Converts lists and related operations to their TF counterpart. |
673 | 671 | visit_List | tensorflow/tensorflow/python/autograph/converters/lists.py | 54 | method | |
674 | 672 | visit_Call | tensorflow/tensorflow/python/autograph/converters/lists.py | 131 | method | |
675 | 673 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/converters/lists.py | 209 | method | |
676 | 674 | visit_For | tensorflow/tensorflow/python/autograph/converters/lists.py | 215 | method | |
677 | 675 | visit_While | tensorflow/tensorflow/python/autograph/converters/lists.py | 221 | method | |
678 | 676 | visit_If | tensorflow/tensorflow/python/autograph/converters/lists.py | 227 | method | |
679 | 677 | visit_With | tensorflow/tensorflow/python/autograph/converters/lists.py | 233 | method | |
680 | 678 | transform | tensorflow/tensorflow/python/autograph/converters/lists.py | 239 | function | |
681 | 679 | LogicalExpressionTransformer | tensorflow/tensorflow/python/autograph/converters/logical_expressions.py | 49 | class | Converts logical expressions to corresponding TF calls. |
682 | 680 | visit_Compare | tensorflow/tensorflow/python/autograph/converters/logical_expressions.py | 83 | method | |
683 | 681 | visit_UnaryOp | tensorflow/tensorflow/python/autograph/converters/logical_expressions.py | 114 | method | |
684 | 682 | visit_BoolOp | tensorflow/tensorflow/python/autograph/converters/logical_expressions.py | 123 | method | |
685 | 683 | transform | tensorflow/tensorflow/python/autograph/converters/logical_expressions.py | 135 | function | |
686 | 684 | ConditionalReturnRewriter | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 43 | class | Rewrites a pattern where it is not obvious that all paths return a value.
This rewrite allows avoiding intermediate None return values.
The following pattern:
if cond:
  <block 1>
  return
else:
  <block 2>
<block 3>
is converted to:
if cond:
  <block 1>
  return
else:
  <block 2>
  <block 3>
and vice-versa (if the else returns, subsequent statements are moved under the
if branch). |
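The rewrite above can be illustrated with a hand-written Python sketch (illustrative stand-in functions, not AutoGraph's generated code): moving the trailing `<block 3>` under the `else` branch makes it explicit that every path through the function returns.

```python
def before(x, log):
    # Original shape: <block 3> sits after the if/else.
    if x < 0:
        log.append('neg')
        return log
    log.append('nonneg')  # <block 3>: only runs when the `if` did not return
    return log

def after(x, log):
    # Rewritten shape: <block 3> moved under the else, so every branch
    # visibly ends in a return.
    if x < 0:
        log.append('neg')
        return log
    else:
        log.append('nonneg')  # <block 3>, now inside the else
        return log
```

Both functions are behaviorally identical; only the structure changes, which lets later passes avoid inserting intermediate `None` return values.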
687 | 685 | visit_Return | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 70 | method | |
688 | 686 | visit_While | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 100 | method | |
689 | 687 | visit_For | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 106 | method | |
690 | 688 | visit_With | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 113 | method | |
691 | 689 | visit_Try | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 120 | method | |
692 | 690 | visit_ExceptHandler | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 130 | method | |
693 | 691 | visit_If | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 135 | method | |
694 | 692 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 153 | method | |
695 | 693 | ReturnStatementsTransformer | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 183 | class | Lowers return statements into variables and conditionals.
Specifically, the following pattern:
<block 1>
return val
<block 2>
is converted to:
do_return = False
retval = None
<block 1>
do_return = True
retval = val
if not do_return:
  <block 2>
return retval
The conversion adjusts loops as well:
<block 1>
while cond:
  <block 2>
  return retval
is converted to:
<block 1>
while not do_return and cond:
  <block 2>
  do_return = True
  retval = val |
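The single-return lowering described above can be sketched in plain Python (hand-written illustration; AutoGraph generates its own identifiers):

```python
def early_return(x):
    # Original pattern: a return in the middle of the body.
    if x > 0:
        return x * x
    return -x

def lowered(x):
    # Equivalent single-return form, as the transformer would emit it:
    # do_return/retval flags carry the result to a single return at the end.
    do_return = False
    retval = None
    if x > 0:
        do_return = True
        retval = x * x
    if not do_return:
        do_return = True
        retval = -x
    return retval
```

Both functions compute the same result for every input; the lowered form is what makes returns representable inside `tf.cond`/`tf.while_loop`-style staged control flow.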
696 | 694 | visit_Return | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 227 | method | |
697 | 695 | visit_While | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 283 | method | |
698 | 696 | visit_For | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 297 | method | |
699 | 697 | visit_With | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 319 | method | |
700 | 698 | visit_Try | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 324 | method | |
701 | 699 | visit_ExceptHandler | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 331 | method | |
702 | 700 | visit_If | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 335 | method | |
703 | 701 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 341 | method | |
704 | 702 | transform | tensorflow/tensorflow/python/autograph/converters/return_statements.py | 392 | function | Ensure a function has only a single return, at the end. |
705 | 703 | SliceTransformer | tensorflow/tensorflow/python/autograph/converters/slices.py | 28 | class | Converts slicing operations to their TF counterpart.
Currently, relying on the default slice operator that Tensor uses is
insufficient, because TensorArray and tensor lists use dedicated index read
and write functions. |
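A minimal sketch of why slicing is lowered to dedicated calls: container types like TensorArray expose explicit index operations rather than `__getitem__`/`__setitem__`. `index_read` and `index_write` here are hypothetical stand-ins for the dedicated operators, not AutoGraph's actual names.

```python
def index_read(container, i):
    # Dedicated read: `y = x[i]` is rewritten to `y = index_read(x, i)`.
    return container[i]

def index_write(container, i, value):
    # Dedicated write: `x[i] = v` is rewritten to `x = index_write(x, i, v)`,
    # returning a new container rather than mutating in place.
    out = list(container)
    out[i] = value
    return out
```

Routing all subscript access through operators like these lets the runtime dispatch to TensorArray/tensor-list read and write functions when the container is not a plain Python sequence.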
706 | 704 | visit_Assign | tensorflow/tensorflow/python/autograph/converters/slices.py | 48 | method | |
707 | 705 | visit_Subscript | tensorflow/tensorflow/python/autograph/converters/slices.py | 58 | method | |
708 | 706 | transform | tensorflow/tensorflow/python/autograph/converters/slices.py | 84 | function | |
709 | 707 | VariableAccessTransformer | tensorflow/tensorflow/python/autograph/converters/variables.py | 28 | class | Rewrites basic symbol reads.
This transformer rewrites variable reads with a "read" operator which allows
tracking activity.
Example:
For a basic statement:
a = b + c
This is translated to:
a = ld(b) + ld(c)
Augmented assignment operations also introduce an `ld` operator:
a += b
The assignment target also receives an operator to properly represent the
read:
a = ld(a)
a += ld(b) |
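A toy version of the `ld` read operator: it returns the value unchanged but records the access, which is what makes reads trackable. The `(value, name)` signature is an assumption for illustration only; the real operator works on resolved symbols.

```python
reads = []

def ld(value, name):
    # Hypothetical read operator: record the access, pass the value through.
    reads.append(name)
    return value

b, c = 2, 3
# The statement  a = b + c  is rewritten to:
a = ld(b, 'b') + ld(c, 'c')
```

Because `ld` is semantically an identity function, the rewrite changes nothing about the computed result while giving the framework a hook to observe every symbol read.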
710 | 708 | visit_Name | tensorflow/tensorflow/python/autograph/converters/variables.py | 55 | method | |
711 | 709 | visit_Delete | tensorflow/tensorflow/python/autograph/converters/variables.py | 63 | method | |
712 | 710 | visit_AugAssign | tensorflow/tensorflow/python/autograph/converters/variables.py | 88 | method | |
713 | 711 | transform | tensorflow/tensorflow/python/autograph/converters/variables.py | 100 | function | |
714 | 712 | control_status_ctx | tensorflow/tensorflow/python/autograph/core/ag_ctx.py | 35 | function | |
715 | 713 | Status | tensorflow/tensorflow/python/autograph/core/ag_ctx.py | 40 | class | |
716 | 714 | ControlStatusCtx | tensorflow/tensorflow/python/autograph/core/ag_ctx.py | 46 | class | A context that tracks whether autograph is enabled by the user. |
717 | 715 | NullCtx | tensorflow/tensorflow/python/autograph/core/ag_ctx.py | 66 | class | Helper substitute for contextlib.nullcontext. |
718 | 716 | Rule | tensorflow/tensorflow/python/autograph/core/config_lib.py | 27 | class | Base class for conversion rules. |
719 | 717 | matches | tensorflow/tensorflow/python/autograph/core/config_lib.py | 33 | method | |
720 | 718 | Action | tensorflow/tensorflow/python/autograph/core/config_lib.py | 38 | class | |
721 | 719 | DoNotConvert | tensorflow/tensorflow/python/autograph/core/config_lib.py | 44 | class | Indicates that this module should not be converted. |
722 | 720 | get_action | tensorflow/tensorflow/python/autograph/core/config_lib.py | 50 | method | |
723 | 721 | Convert | tensorflow/tensorflow/python/autograph/core/config_lib.py | 56 | class | Indicates that this module should be converted. |
724 | 722 | get_action | tensorflow/tensorflow/python/autograph/core/config_lib.py | 62 | method | |
725 | 723 | Feature | tensorflow/tensorflow/python/autograph/core/converter.py | 83 | class | This enumeration represents optional conversion options.
These conversion options are experimental. They are subject to change without
notice and offer no guarantees.
_Example Usage_
```python
optionals = tf.autograph.experimental.Feature.EQUALITY_OPERATORS
@tf.function(experimental_autograph_options=optionals)
def f(i):
if i == 0: # EQUALITY_OPERATORS allows the use of == here.
tf.print('i is zero')
```
Attributes:
ALL: Enable all features.
AUTO_CONTROL_DEPS: Insertion of control dependencies in the generated code.
ASSERT_STATEMENTS: Convert Tensor-dependent assert statements to tf.Assert.
BUILTIN_FUNCTIONS: Convert builtin functions applied to Tensors to
their TF counterparts.
EQUALITY_OPERATORS: Whether to convert the comparison operators, like
equality. This is soon to be deprecated as support is being added to the
Tensor class.
LISTS: Convert list idioms, like initializers, slices, append, etc.
NAME_SCOPES: Insert name scopes that name ops according to context, like the
function they were defined in. |
726 | 724 | all | tensorflow/tensorflow/python/autograph/core/converter.py | 123 | method | Returns a tuple that enables all options. |
727 | 725 | all_but | tensorflow/tensorflow/python/autograph/core/converter.py | 128 | method | Returns a tuple that enables all but the excluded options. |
728 | 726 | ConversionOptions | tensorflow/tensorflow/python/autograph/core/converter.py | 138 | class | Immutable container for global conversion flags.
Attributes:
recursive: bool, whether to recursively convert any user functions or
classes that the converted function may use.
user_requested: bool, whether the conversion was explicitly requested by
the user, as opposed to being performed as a result of other logic. This
value always auto-resets to False in child conversions.
optional_features: Union[Feature, Set[Feature]], controls the use of
optional features in the conversion process. See Feature for available
options. |
729 | 727 | as_tuple | tensorflow/tensorflow/python/autograph/core/converter.py | 169 | method | |
730 | 728 | uses | tensorflow/tensorflow/python/autograph/core/converter.py | 183 | method | |
731 | 729 | call_options | tensorflow/tensorflow/python/autograph/core/converter.py | 187 | method | Returns the corresponding options to be used for recursive conversion. |
732 | 730 | to_ast | tensorflow/tensorflow/python/autograph/core/converter.py | 195 | method | Returns a representation of this object as an AST node.
The AST node encodes a constructor that would create an object with the
same contents.
Returns:
ast.Node |
733 | 731 | list_of_features | tensorflow/tensorflow/python/autograph/core/converter.py | 215 | method | |
734 | 732 | ProgramContext | tensorflow/tensorflow/python/autograph/core/converter.py | 236 | class | ProgramContext keeps track of converting function hierarchies.
Attributes:
options: ConversionOptions
autograph_module: Deprecated. Do not use. |
735 | 733 | Base | tensorflow/tensorflow/python/autograph/core/converter.py | 249 | class | All converters should inherit from this class.
Attributes:
ctx: EntityContext |
736 | 734 | get_definition_directive | tensorflow/tensorflow/python/autograph/core/converter.py | 262 | method | Returns the unique directive argument for a symbol.
See lang/directives.py for details on directives.
Example:
# Given a directive in the code:
ag.foo_directive(bar, baz=1)
# One can write for an AST node Name(id='bar'):
get_definition_directive(node, ag.foo_directive, 'baz')
Args:
node: ast.AST, the node representing the symbol for which the directive
argument is needed.
directive: Callable[..., Any], the directive to search.
arg: str, the directive argument to return.
default: Any
Raises:
ValueError: if conflicting annotations have been found |
737 | 735 | visit | tensorflow/tensorflow/python/autograph/core/converter.py | 311 | method | |
738 | 736 | allowlist | tensorflow/tensorflow/python/autograph/core/converter_testing.py | 35 | function | Helper that marks a callable as allowlisted. |
739 | 737 | is_inside_generated_code | tensorflow/tensorflow/python/autograph/core/converter_testing.py | 47 | function | Tests whether the caller is generated code. Implementation-specific. |
740 | 738 | FunctionScope | tensorflow/tensorflow/python/autograph/core/function_wrappers.py | 33 | class | Context manager that wraps the body of a converted function.
This context manager handles various operations related to the scope of a
function:
* optional TF name scopes - these name scopes match the name of the
function, for easy visualization in TensorBoard;
* optional automatic control dependencies - this adds the same mechanism
for control dependencies that is used by `@tf.function`; it can be
optionally enabled when using `tf.autograph.to_graph`;
* tracking of autograph conversion state (whether it's enabled by the user,
conversion options). |
741 | 739 | ret | tensorflow/tensorflow/python/autograph/core/function_wrappers.py | 91 | method | Marks a value as returned from the function guarded by the scope. |
742 | 740 | with_function_scope | tensorflow/tensorflow/python/autograph/core/function_wrappers.py | 114 | function | Inline version of the FunctionScope context manager. |
743 | 741 | UnsupportedFeaturesChecker | tensorflow/tensorflow/python/autograph/core/unsupported_features_checker.py | 26 | class | Quick check for Python features we know we don't support.
Any features detected will cause AutoGraph to not compile a function. |
744 | 742 | visit_Attribute | tensorflow/tensorflow/python/autograph/core/unsupported_features_checker.py | 32 | method | |
745 | 743 | visit_For | tensorflow/tensorflow/python/autograph/core/unsupported_features_checker.py | 39 | method | |
746 | 744 | visit_While | tensorflow/tensorflow/python/autograph/core/unsupported_features_checker.py | 45 | method | |
747 | 745 | visit_Yield | tensorflow/tensorflow/python/autograph/core/unsupported_features_checker.py | 53 | method | |
748 | 746 | visit_YieldFrom | tensorflow/tensorflow/python/autograph/core/unsupported_features_checker.py | 56 | method | |
749 | 747 | verify | tensorflow/tensorflow/python/autograph/core/unsupported_features_checker.py | 60 | function | |
750 | 748 | is_autograph_strict_conversion_mode | tensorflow/tensorflow/python/autograph/impl/api.py | 72 | function | |
751 | 749 | AutoGraphError | tensorflow/tensorflow/python/autograph/impl/api.py | 82 | class | Base class for all AutoGraph exceptions. |
752 | 750 | ConversionError | tensorflow/tensorflow/python/autograph/impl/api.py | 87 | class | Raised during the conversion process. |
753 | 751 | StagingError | tensorflow/tensorflow/python/autograph/impl/api.py | 92 | class | Raised during the staging (i.e. Python execution) of converted code. |
754 | 752 | StackTraceMapper | tensorflow/tensorflow/python/autograph/impl/api.py | 166 | class | Remaps generated code to code it originated from. |
755 | 753 | get_effective_source_map | tensorflow/tensorflow/python/autograph/impl/api.py | 172 | method | |
756 | 754 | PyToTF | tensorflow/tensorflow/python/autograph/impl/api.py | 203 | class | The TensorFlow AutoGraph transformer. |
757 | 755 | get_transformed_name | tensorflow/tensorflow/python/autograph/impl/api.py | 228 | method | |
758 | 756 | get_extra_locals | tensorflow/tensorflow/python/autograph/impl/api.py | 231 | method | |
759 | 757 | get_caching_key | tensorflow/tensorflow/python/autograph/impl/api.py | 234 | method | |
760 | 758 | initial_analysis | tensorflow/tensorflow/python/autograph/impl/api.py | 237 | method | |
761 | 759 | transform_ast | tensorflow/tensorflow/python/autograph/impl/api.py | 250 | method | |
762 | 760 | autograph_artifact | tensorflow/tensorflow/python/autograph/impl/api.py | 298 | function | |
763 | 761 | is_autograph_artifact | tensorflow/tensorflow/python/autograph/impl/api.py | 303 | function | |
764 | 762 | converted_call | tensorflow/tensorflow/python/autograph/impl/api.py | 307 | function | Converts a function call inline.
For internal use only.
Note: The argument list is optimized for readability of generated code, which
may look like this:
ag__.converted_call(f, (arg1, arg2), None, fscope)
ag__.converted_call(f, (), dict(arg1=val1, **kwargs), fscope)
ag__.converted_call(f, (arg1, arg2) + varargs, dict(**kwargs), lscope)
Args:
f: The function to convert.
args: Tuple, the original positional arguments of f
kwargs: Optional[Dict], the original keyword arguments of f
caller_fn_scope: Optional[function_wrappers.FunctionScope], the function
scope of the converted function in which this call was originally made.
options: Optional[converter.ConversionOptions], conversion options. If not
specified, the value of caller_fn_scope.callopts is used. Either options
or caller_fn_scope must be present.
Returns:
Any, the result of executing a possibly-converted `f` with the given
arguments. |
765 | 763 | tf_convert | tensorflow/tensorflow/python/autograph/impl/api.py | 506 | function | Decorator that applies AutoGraph to a function.
Use in internal APIs.
This API is suitable for high order functions internal to the TensorFlow API,
and more generally any function to which Autograph is not applied.
Guidance: convert was a decorator meant for use directly by developers, and
will soon be deprecated in favor of tf.function. tf_convert is to be called
from high order functions internal to TF.
Args:
f: Callable.
ctx: ag_ctx.ControlStatusCtx, the Autograph context in which `f` is used.
convert_by_default: bool, whether to use AutoGraph when the context doesn't
specify.
user_requested: bool, whether to ignore the conversion allowlist. See
ConversionOptions.user_requested.
Returns:
Either `f` or the converted version of `f`. |
766 | 764 | call_with_unspecified_conversion_status | tensorflow/tensorflow/python/autograph/impl/api.py | 565 | function | Decorator that resets the conversion context to the unspecified status. |
767 | 765 | do_not_convert | tensorflow/tensorflow/python/autograph/impl/api.py | 599 | function | Decorator that suppresses the conversion of a function.
Args:
func: function to decorate.
Returns:
If `func` is not None, returns a `Callable` which is equivalent to
`func`, but is not converted by AutoGraph.
If `func` is None, returns a decorator that, when invoked with a
single `func` argument, returns a `Callable` equivalent to the
above case. |
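The docstring above describes a dual-use decorator: applicable both bare (`@do_not_convert`) and called (`@do_not_convert()`). A pure-Python stand-in for that pattern (the `autograph_disabled` marker is illustrative only; the real decorator wraps the function to suppress conversion):

```python
def do_not_convert_sketch(func=None):
    def decorator(f):
        f.autograph_disabled = True  # illustrative marker, not the real mechanism
        return f
    if func is None:
        return decorator       # used as @do_not_convert_sketch()
    return decorator(func)     # used as @do_not_convert_sketch

@do_not_convert_sketch
def f():
    return 1

@do_not_convert_sketch()
def g():
    return 2
```

Accepting `func=None` is the standard way a decorator supports both call forms: with no argument it returns the decorator itself; with a function it decorates immediately.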
768 | 766 | convert | tensorflow/tensorflow/python/autograph/impl/api.py | 626 | function | Decorator that compiles a function to use TensorFlow ops.
The decorator is dynamic - it recompiles the target whenever the decorated
function is called. This means the parameter values are known at conversion.
It also means that repeated calls with different types of parameters will be
correctly processed.
Args:
recursive: bool, whether to recursively convert any functions or classes
that the converted function may use.
optional_features: converted.Feature, allows toggling optional or
experimental features. When set to None, only the core features are
enabled.
user_requested: bool, whether this is a function that the user explicitly
asked to be converted. See ConversionOptions.user_requested.
conversion_ctx: Optional ag_ctx.ControlStatusCtx, the Autograph context in
which `f` is used.
Returns:
Callable, a decorator that converts the given function into an equivalent
function that uses TensorFlow ops. |
769 | 767 | to_graph | tensorflow/tensorflow/python/autograph/impl/api.py | 682 | function | Converts a Python entity into a TensorFlow graph.
Also see: `tf.autograph.to_code`, `tf.function`.
Unlike `tf.function`, `to_graph` is a low-level transpiler that converts
Python code to TensorFlow graph code. It does not implement any caching,
variable management or create any actual ops, and is best used where greater
control over the generated TensorFlow graph is desired. Another difference
from `tf.function` is that `to_graph` will not wrap the graph into a
TensorFlow function or a Python callable. Internally, `tf.function` uses
`to_graph`.
Example usage:
>>> def f(x):
... if x > 0:
... y = x * x
... else:
... y = -x
... return y
...
>>> converted_f = to_graph(f)
>>> x = tf.constant(2)
>>> converted_f(x) # converted_f is like a TensorFlow Op.
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Supported Python entities include:
* functions
* classes
* object methods
Functions are converted into new functions with converted code.
Classes are converted by generating a new class whose methods use converted
code.
Methods are converted into unbound functions that have an additional first
argument called `self`.
For a tutorial, see the
[tf.function and AutoGraph guide](https://www.tensorflow.org/guide/function).
For more detailed information, see the
[AutoGraph reference documentation](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/index.md).
Args:
entity: Python callable or class to convert.
recursive: Whether to recursively convert any functions that the converted
function may call.
experimental_optional_features: `None`, a tuple of, or a single
`tf.autograph.experimental.Feature` value.
Returns:
Same as `entity`, the converted Python function or class.
Raises:
ValueError: If the entity could not be converted. |
770 | 768 | to_graph_v1 | tensorflow/tensorflow/python/autograph/impl/api.py | 754 | function | Converts a Python entity into a TensorFlow graph.
Also see: `tf.autograph.to_code`, `tf.function`.
Unlike `tf.function`, `to_graph` is a low-level transpiler that converts
Python code to TensorFlow graph code. It does not implement any caching,
variable management or create any actual ops, and is best used where greater
control over the generated TensorFlow graph is desired. Another difference
from `tf.function` is that `to_graph` will not wrap the graph into a
TensorFlow function or a Python callable. Internally, `tf.function` uses
`to_graph`.
_Example Usage_
```python
def foo(x):
if x > 0:
y = x * x
else:
y = -x
return y
converted_foo = to_graph(foo)
x = tf.constant(1)
y = converted_foo(x) # converted_foo behaves like a TensorFlow Op.
assert is_tensor(y)
```
Supported Python entities include:
* functions
* classes
* object methods
Functions are converted into new functions with converted code.
Classes are converted by generating a new class whose methods use converted
code.
Methods are converted into unbound functions that have an additional first
argument called `self`.
Args:
entity: Python callable or class to convert.
recursive: Whether to recursively convert any functions that the converted
function may call.
arg_values: Deprecated.
arg_types: Deprecated.
experimental_optional_features: `None`, a tuple of, or a single
`tf.autograph.experimental.Feature` value.
Returns:
Same as `entity`, the converted Python function or class.
Raises:
ValueError: If the entity could not be converted. |
771 | 769 | to_code_v1 | tensorflow/tensorflow/python/autograph/impl/api.py | 825 | function | Returns the source code generated by AutoGraph, as a string.
Example usage:
>>> def f(x):
... if x < 0:
... x = -x
... return x
>>> tf.autograph.to_code(f)
"...def tf__f(x):..."
Also see: `tf.autograph.to_graph`.
Note: If a function has been decorated with `tf.function`, pass its
underlying Python function, rather than the callable that `tf.function`
creates:
>>> @tf.function
... def f(x):
... if x < 0:
... x = -x
... return x
>>> tf.autograph.to_code(f.python_function)
"...def tf__f(x):..."
Args:
entity: Python callable or class.
recursive: Whether to recursively convert any functions that the converted
function may call.
arg_values: Deprecated.
arg_types: Deprecated.
indentation: Deprecated.
experimental_optional_features: `None`, a tuple of, or a single
`tf.autograph.experimental.Feature` value.
Returns:
The converted code as a string. |
772 | 770 | to_code | tensorflow/tensorflow/python/autograph/impl/api.py | 879 | function | Returns the source code generated by AutoGraph, as a string.
Example usage:
>>> def f(x):
... if x < 0:
... x = -x
... return x
>>> tf.autograph.to_code(f)
"...def tf__f(x):..."
Also see: `tf.autograph.to_graph`.
Note: If a function has been decorated with `tf.function`, pass its
underlying Python function, rather than the callable that `tf.function`
creates:
>>> @tf.function
... def f(x):
... if x < 0:
... x = -x
... return x
>>> tf.autograph.to_code(f.python_function)
"...def tf__f(x):..."
Args:
entity: Python callable or class to convert.
recursive: Whether to recursively convert any functions that the converted
function may call.
experimental_optional_features: `None`, a tuple of, or a single
`tf.autograph.experimental.Feature` value.
Returns:
The converted code as a string. |
773 | 771 | is_unsupported | tensorflow/tensorflow/python/autograph/impl/conversion.py | 73 | function | Checks whether an entity is supported by AutoGraph at all. |
774 | 772 | is_allowlisted | tensorflow/tensorflow/python/autograph/impl/conversion.py | 116 | function | Checks whether an entity is allowed for use in graph mode.
Examples of allowed entities include all members of the tensorflow
package.
Args:
o: A Python entity.
check_call_override: Reserved for internal use. When set to `False`, it
disables the rule according to which classes are allowed if their
__call__ method is allowed.
allow_namedtuple_subclass: Reserved for internal use. When `True`,
namedtuple subclasses are allowed.
Returns:
Boolean |
775 | 773 | is_in_allowlist_cache | tensorflow/tensorflow/python/autograph/impl/conversion.py | 221 | function | |
776 | 774 | cache_allowlisted | tensorflow/tensorflow/python/autograph/impl/conversion.py | 229 | function | |
777 | 775 | set_element_type | tensorflow/tensorflow/python/autograph/lang/directives.py | 33 | function | Indicates that the entity is expected to hold items of specified type/shape.
The staged TensorFlow ops will reflect and assert this data type. Ignored
otherwise.
Args:
entity: The entity to annotate.
dtype: TensorFlow dtype value to assert for entity.
shape: Optional shape to assert for entity. |
778 | 776 | set_loop_options | tensorflow/tensorflow/python/autograph/lang/directives.py | 50 | function | Specifies additional arguments to be passed to the enclosing while_loop.
The parameters apply only to the immediately enclosing loop. They take
effect only if the loop is staged as a TF while_loop; otherwise they have
no effect.
Usage:
>>> @tf.function(autograph=True)
... def f():
... n = 0
... for i in tf.range(10):
... tf.autograph.experimental.set_loop_options(maximum_iterations=3)
... n += 1
... return n
>>> @tf.function(autograph=True)
... def f():
... v = tf.constant((0,))
... for i in tf.range(3):
... tf.autograph.experimental.set_loop_options(
... shape_invariants=[(v, tf.TensorShape([None]))]
... )
... v = tf.concat((v, [i]), 0)
... return v
Also see tf.while_loop.
Args:
parallel_iterations: The maximum number of iterations allowed to run in
parallel at any given time. Note that this does not guarantee parallel
execution.
swap_memory: Whether to store intermediate values needed for
gradients on the CPU instead of GPU.
maximum_iterations: Allows limiting the total number of iterations executed
by the loop.
shape_invariants: Allows controlling the argument with the same name passed
to tf.while_loop. Unlike tf.while_loop, this is a list of
`(tensor, shape)` pairs. |
779 | 777 | match_staging_level | tensorflow/tensorflow/python/autograph/lang/special_functions.py | 50 | function | Casts a value to be staged at the same level as another. |
780 | 778 | tensor_list | tensorflow/tensorflow/python/autograph/lang/special_functions.py | 57 | function | Creates a tensor list and populates it with the given elements.
This function provides more uniform access to tensor lists and tensor
arrays, and allows optional initialization.
Note: this function is a simplified wrapper. If you need greater control,
it is recommended to use the underlying implementation directly.
Args:
elements: Iterable[tf.Tensor, ...], the elements to initially fill the list
with
element_dtype: Optional[tf.DType], data type for the elements in the list;
required if the list is empty
element_shape: Optional[tf.TensorShape], shape for the elements in the list;
required if the list is empty
use_tensor_array: bool, whether to use the more compatible but restrictive
tf.TensorArray implementation
Returns:
Union[tf.Tensor, tf.TensorArray], the new list.
Raises:
ValueError: for invalid arguments |
781 | 779 | stack | tensorflow/tensorflow/python/autograph/lang/special_functions.py | 92 | function | Stacks the input, if it admits the notion of stacking.
For example, a list of tensors can be stacked into a larger tensor. This
function is similar to tf.stack, but it accepts non-lists and lists of
non-tensors as arguments. In the latter case, the function does nothing.
Args:
list_or_tensor: Any
element_dtype: tf.DType, optional dtype for the elements in the list.
Required if the input is stackable, and the list is untyped.
strict: bool, if True an error is raised if the input is not stackable.
Otherwise the function is a no-op.
Returns:
Any, if the input is stackable, the result will be a tf.Tensor. Otherwise,
if strict=False, the result will be list_or_tensor.
Raises:
ValueError: if strict=True and the input is not stackable. |
782 | 780 | if_exp | tensorflow/tensorflow/python/autograph/operators/conditional_expressions.py | 27 | function | |
783 | 781 | verify_single_cond_var | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 233 | function | Verifies whether body_var and orelse_var are consistent. |
784 | 782 | for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 291 | function | Functional form of a for statement.
The loop operates on a state, which includes all symbols that are
variant across loop iterations, excluding the variables local to the loop.
For example, given the loop below that calculates the geometric and
arithmetic means of some numbers:
```
geo_mean = 1
arith_mean = 0
for i in range(n):
a = numbers[i]
geo_mean *= a
arith_mean += a
```
The state is represented by the variables geo_mean and arith_mean. The
`extra_test`, `body`, `get_state` and `set_state` functions must bind to the
original `geo_mean` and `arith_mean` symbols, using `nonlocal`.
The inputs and outputs of the callables representing the loop blocks are not
explicit - instead, these functions must use nonlocal/global for side
effects. The inputs and outputs are controlled by the set_state/get_state
functions.
Args:
iter_: The entity being iterated over.
extra_test: Callable with boolean return type.
An additional loop condition.
body: Callable representing the actual loop body.
get_state: Additional callable which can capture additional state (such as
the values of composite symbols). This is only useful when staging the
loop.
set_state: Additional callable which saves values captured by get_state back
into the Python environment. This is only useful when staging the loop.
symbol_names: Tuple containing names of the loop variables returned by
get_state.
opts: Optional dict of extra loop parameters.
Returns:
Tuple containing the final state. |
785 | 783 | while_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 727 | function | Functional form of a while statement.
The loop operates on a so-called state, which includes all symbols that are
variant across loop iterations. In what follows we refer to state as either
a tuple of entities that represent an actual state, or a list of arguments
of the corresponding types.
The inputs and outputs of the callables representing the loop blocks are not
explicit - instead, these functions must use nonlocal/global for side
effects. The inputs and outputs are controlled by the set_state/get_state
functions.
Args:
test: Callable with boolean return type. The loop condition.
body: Callable representing the actual loop body.
get_state: Additional callable which can capture additional state (such as
the values of composite symbols). This is only useful when staging the
loop.
set_state: Additional callable which saves values captured by get_state back
into the Python environment. This is only useful when staging the loop.
symbol_names: Tuple containing the names of all loop variables.
opts: Optional dict of extra loop parameters.
Returns:
Tuple containing the final state. |
786 | 784 | if_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow.py | 915 | function | Functional form of an if statement.
The conditional operates on a state, which includes all symbols whose values
are a function of the branch taken.
For example, given the code below that calculates the abs function:
```
x = 1
if x < 0:
x = -x
```
The state is represented by the variable `x`. The `body`, `orelse` and
`set_state` functions must bind to the original `x` symbol, using `nonlocal`.
The inputs and outputs of the callables representing the branches are not
explicit - instead, these functions must use nonlocal/global for side
effects. The inputs and outputs are controlled by the set_state/get_state
functions.
Args:
cond: Boolean.
body: Callable representing the main block of the conditional.
orelse: Callable representing the else block of the conditional.
get_state: Function that returns a tuple containing the values of all
composite symbols modified within the conditional. This allows access to
state that branches may mutate through side effects. This function is not
needed and should not be called when dispatching to code matching Python's
default semantics. This is useful for checkpointing to avoid unintended
side-effects when staging requires evaluating all code-paths.
set_state: Function to set the values of all composite symbols modified
within the conditional. This is the complement to get_state, used to
restore checkpointed values. The single argument is a tuple containing
values for each composite symbol that may be modified in a branch of the
conditional. This is usually the result of a call to get_state.
symbol_names: Tuple containing basic loop var names.
nouts: Number of variables output by the statement. Vars which are
not outputs will not be passed through staged control flow such as
tf.cond. This includes variables that are defined before the conditional,
but are not used after it. |
787 | 785 | for_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 279 | function | Functional form of a for statement.
The loop operates on a state, which includes all symbols that are
variant across loop iterations, excluding the iterate as well as the
variables local to the loop.
For example, given the loop below that calculates the geometric and
arithmetic means of some numbers:
geo_mean = 1
arith_mean = 0
for i in range(n):
a = numbers[i]
geo_mean *= a
arith_mean += a
The state is represented by the variables geo_mean and arith_mean. The
argument for initial_state may contain the tuple (1, 0), the body will
include the arguments geo_mean and arith_mean and will return a tuple
representing the new values for geo_mean and arith_mean, respectively.
Args:
iter_: The entity being iterated over.
extra_test: Callable with the state as arguments, and boolean return type.
An additional loop condition.
body: Callable with the iterate and the state as arguments, and state as
return type. The actual loop body.
get_state: Additional callable which can capture additional state (such as
the values of composite symbols). This is only useful when staging the
loop.
set_state: Additional callable which saves values captured by get_state back
into the Python environment. This is only useful when staging the loop.
init_vars: Tuple containing the initial state.
basic_symbol_names: Tuple containing basic loop var names.
composite_symbol_names: Tuple containing composite loop var names.
opts: Optional dict of extra loop parameters.
Returns:
Tuple containing the final state. |
788 | 786 | while_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 817 | function | Functional form of a while statement.
The loop operates on a so-called state, which includes all symbols that are
variant across loop iterations. In what follows we refer to state as either
a tuple of entities that represent an actual state, or a list of arguments
of the corresponding types.
Args:
test: Callable with the state as arguments, and boolean return type. The
loop condition.
body: Callable with the state as arguments, and state as return type. The
actual loop body.
get_state: Additional callable which can capture additional state (such as
the values of composite symbols). This is only useful when staging the
loop.
set_state: Additional callable which saves values captured by get_state back
into the Python environment. This is only useful when staging the loop.
init_vars: Tuple containing the initial state.
basic_symbol_names: Tuple containing basic loop var names.
composite_symbol_names: Tuple containing composite loop var names.
opts: Optional dict of extra loop parameters.
Returns:
Tuple containing the final state. |
789 | 787 | if_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 1008 | function | Functional form of an if statement.
Args:
cond: Boolean.
body: Callable with no arguments, and outputs of the positive (if) branch as
return type.
orelse: Callable with no arguments, and outputs of the negative (else)
branch as return type.
get_state: Function that returns a tuple containing the values of all
composite symbols modified within the conditional. This allows access to
state that branches may mutate through side effects. This function is not
needed and should not be called when dispatching to code matching Python's
default semantics. This is useful for checkpointing to avoid unintended
side-effects when staging requires evaluating all code-paths.
set_state: Function to set the values of all composite symbols modified
within the conditional. This is the complement to get_state, used to
restore checkpointed values. The single argument is a tuple containing
values for each composite symbol that may be modified in a branch of the
conditional. This is usually the result of a call to get_state.
basic_symbol_names: Tuple containing basic loop var names.
composite_symbol_names: Tuple containing composite loop var names.
Returns:
Tuple containing the statement outputs. |
790 | 788 | tf_if_stmt | tensorflow/tensorflow/python/autograph/operators/control_flow_deprecated_py2.py | 1048 | function | Overload of if_stmt that stages a TF cond. |
791 | 789 | new_list | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 36 | function | The list constructor.
Args:
iterable: Optional elements to fill the list with.
Returns:
A list-like object. The exact return value depends on the initial elements. |
792 | 790 | tf_tensor_array_new | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 57 | function | Overload of new_list that stages a Tensor list creation. |
793 | 791 | tf_tensor_list_new | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 107 | function | Overload of new_list that stages a Tensor list creation. |
794 | 792 | list_append | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 171 | function | The list append function.
Note: it is unspecified whether list_ will be mutated or not. If list_ is
a TensorFlow entity, it will typically not be mutated. If list_ is a plain
list, it will be. In general, if the list is mutated then the return value
should point to the original entity.
Args:
list_: An entity that supports append semantics.
x: The element to append.
Returns:
Same as list_, after the append was performed.
Raises:
ValueError: if list_ is not of a known list-like type. |
795 | 793 | ListPopOpts | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 230 | class | |
796 | 794 | list_pop | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 235 | function | The list pop function.
Note: it is unspecified whether list_ will be mutated or not. If list_ is
a TensorFlow entity, it will typically not be mutated. If list_ is a plain
list, it will be. In general, if the list is mutated then the return value
should point to the original entity.
Args:
list_: An entity that supports pop semantics.
i: Optional index to pop from. May be None.
opts: A ListPopOpts.
Returns:
Tuple (x, out_list_):
out_list_: same as list_, after the removal was performed.
x: the removed element value.
Raises:
ValueError: if list_ is not of a known list-like type or the operation is
not supported for that type. |
797 | 795 | ListStackOpts | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 299 | class | |
798 | 796 | list_stack | tensorflow/tensorflow/python/autograph/operators/data_structures.py | 305 | function | The list stack function.
This does not have a direct correspondent in Python. The closest idiom to
this is tf.append or np.stack. It's different from those in the sense that it
accepts a Tensor list, rather than a list of tensors. It can also accept
TensorArray. When the target is anything else, the dispatcher will rely on
ctx.original_call for fallback.
Args:
list_: An entity that supports append semantics.
opts: A ListStackOpts object.
Returns:
The output of the stack operation, typically a Tensor. |
799 | 797 | DispatchContext | tensorflow/tensorflow/python/autograph/operators/dispatch_context.py | 27 | class | Allows passing additional parameters to the specific implementations.
Attributes:
options: Optional dict of extra arguments that may be required by specific
implementations. |
800 | 798 | option | tensorflow/tensorflow/python/autograph/operators/dispatch_context.py | 37 | method | |
801 | 799 | assert_stmt | tensorflow/tensorflow/python/autograph/operators/exceptions.py | 26 | function | Functional form of an assert statement.
This follows the semantics of the Python assert statement, however the
concrete implementations may deviate from it. See the respective
implementation for details.
In general, the assert statement should not be used for control flow.
Furthermore, assertion expressions are encouraged to be free of side
effects.
Args:
expression1: Any
expression2: Callable[[], Any], returns the expression to include in the
error message when expression1 evaluates to False. When expression1 is
True, the result of expression2 will not be evaluated, however,
expression2 itself may be evaluated in some implementations.
Returns:
Any, implementation-dependent.
Raises:
ValueError: if any arguments are illegal. |
802 | 800 | not_ | tensorflow/tensorflow/python/autograph/operators/logical.py | 26 | function | Functional form of "not". |
803 | 801 | and_ | tensorflow/tensorflow/python/autograph/operators/logical.py | 43 | function | Functional form of "and". Uses lazy evaluation semantics. |
804 | 802 | or_ | tensorflow/tensorflow/python/autograph/operators/logical.py | 62 | function | Functional form of "or". Uses lazy evaluation semantics. |
805 | 803 | eq | tensorflow/tensorflow/python/autograph/operators/logical.py | 81 | function | Functional form of "equal". |
806 | 804 | not_eq | tensorflow/tensorflow/python/autograph/operators/logical.py | 98 | function | Functional form of "not-equal". |
807 | 805 | overload_of | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 65 | function | |
808 | 806 | locals_in_original_context | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 92 | function | Executes the locals function in the context of a specified function. |
809 | 807 | globals_in_original_context | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 97 | function | Executes the globals function in the context of a specified function. |
810 | 808 | eval_in_original_context | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 102 | function | Executes the eval function in the context of a specified function. |
811 | 809 | super_in_original_context | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 117 | function | Executes the super function in the context of a specified function.
See https://docs.python.org/3/library/functions.html#super for the exact
details.
Args:
f: Callable, typically the super builtin
args: List[Any], the original call arguments
caller_fn_scope: Optional[function_wrappers.FunctionScope], the function
scope of the converted function in which this call was originally made
Returns:
The result of calling `f` as if it was called in the frame indicated by
`caller_fn_scope`. |
812 | 810 | abs_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 179 | function | |
813 | 811 | float_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 202 | function | |
814 | 812 | int_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 219 | function | |
815 | 813 | len_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 241 | function | |
816 | 814 | print_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 318 | function | Overload of the print builtin. |
817 | 815 | range_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 360 | function | |
818 | 816 | enumerate_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 391 | function | |
819 | 817 | zip_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 409 | function | |
820 | 818 | map_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 423 | function | |
821 | 819 | next_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 437 | function | |
822 | 820 | next_tf_iterator | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 509 | function | |
823 | 821 | next_py | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 521 | function | |
824 | 822 | filter_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 527 | function | |
825 | 823 | any_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 541 | function | |
826 | 824 | all_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 570 | function | |
827 | 825 | sorted_ | tensorflow/tensorflow/python/autograph/operators/py_builtins.py | 596 | function | |
828 | 826 | GetItemOpts | tensorflow/tensorflow/python/autograph/operators/slices.py | 34 | class | |
829 | 827 | get_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 38 | function | The slice read operator (i.e. __getitem__).
Note: it is unspecified whether target will be mutated or not. In general,
if target is mutable (like Python lists), it will be mutated.
Args:
target: An entity that supports getitem semantics.
i: Index to read from.
opts: A GetItemOpts object.
Returns:
The read element.
Raises:
ValueError: if target is not of a supported type. |
830 | 828 | set_item | tensorflow/tensorflow/python/autograph/operators/slices.py | 100 | function | The slice write operator (i.e. __setitem__).
Note: it is unspecified whether target will be mutated or not. In general,
if target is mutable (like Python lists), it will be mutated.
Args:
target: An entity that supports setitem semantics.
i: Index to modify.
x: The new element value.
Returns:
Same as target, after the update was performed.
Raises:
ValueError: if target is not of a supported type. |
831 | 829 | ld | tensorflow/tensorflow/python/autograph/operators/variables.py | 22 | function | Load variable operator. |
832 | 830 | ldu | tensorflow/tensorflow/python/autograph/operators/variables.py | 29 | function | Load variable operator that returns Undefined when failing to evaluate.
Note: the name ("load or return undefined") is abbreviated to minimize
the amount of clutter in generated code.
This variant of `ld` is useful when loading symbols that may be undefined at
runtime, such as composite symbols, and whether they are defined or not cannot
be determined statically. For example `d['a']` is undefined when `d` is an
empty dict.
Args:
load_v: Lambda that executes the actual read.
name: Human-readable name of the symbol being read.
Returns:
Either the value of the symbol, or Undefined, if the symbol is not fully
defined. |
833 | 831 | Undefined | tensorflow/tensorflow/python/autograph/operators/variables.py | 54 | class | Represents an undefined symbol in Python.
This is used to reify undefined symbols, which is required to use the
functional form of loops.
Example:
while n > 0:
n = n - 1
s = n
return s # Runtime error if n == 0
This is valid Python code and will not result in an error as long as n
is positive. The use of this class is to stay as close to Python semantics
as possible for staged code of this nature.
Converted version of the above showing the possible usage of this class:
s = Undefined('s')
init_state = (s,)
s = while_loop(cond, body, init_state)
return s # s is an instance of Undefined if the loop never runs
Attributes:
symbol_name: Text, identifier for the undefined symbol |
834 | 832 | read | tensorflow/tensorflow/python/autograph/operators/variables.py | 86 | method | |
835 | 833 | UndefinedReturnValue | tensorflow/tensorflow/python/autograph/operators/variables.py | 106 | class | Represents a return value that is undefined. |
836 | 834 | NoValue | tensorflow/tensorflow/python/autograph/pyct/anno.py | 37 | class | |
837 | 835 | Basic | tensorflow/tensorflow/python/autograph/pyct/anno.py | 43 | class | Container for basic annotation keys.
The enum values are used strictly for documentation purposes. |
838 | 836 | Static | tensorflow/tensorflow/python/autograph/pyct/anno.py | 67 | class | Container for static analysis annotation keys.
The enum values are used strictly for documentation purposes. |
839 | 837 | keys | tensorflow/tensorflow/python/autograph/pyct/anno.py | 110 | function | |
840 | 838 | getanno | tensorflow/tensorflow/python/autograph/pyct/anno.py | 116 | function | |
841 | 839 | hasanno | tensorflow/tensorflow/python/autograph/pyct/anno.py | 123 | function | |
842 | 840 | setanno | tensorflow/tensorflow/python/autograph/pyct/anno.py | 127 | function | |
843 | 841 | delanno | tensorflow/tensorflow/python/autograph/pyct/anno.py | 137 | function | |
844 | 842 | copyanno | tensorflow/tensorflow/python/autograph/pyct/anno.py | 145 | function | |
845 | 843 | dup | tensorflow/tensorflow/python/autograph/pyct/anno.py | 154 | function | Recursively copies annotations in an AST tree.
Args:
node: ast.AST
copy_map: Dict[Hashable, Hashable], maps a source anno key to a destination
key. All annotations with the source key will be copied to identical
annotations with the destination key.
field_name: str |
846 | 844 | CleanCopier | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 30 | class | NodeTransformer-like visitor that copies an AST. |
847 | 845 | copy | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 37 | method | Returns a deep copy of node (excluding some fields, see copy_clean). |
848 | 846 | copy_clean | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 63 | function | Creates a deep copy of an AST.
The copy will not include fields that are prefixed by '__', with the
exception of user-specified annotations.
Args:
node: ast.AST
preserve_annos: Optional[Set[Hashable]], annotation keys to include in the
copy
Returns:
ast.AST |
849 | 847 | SymbolRenamer | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 79 | class | Transformer that can rename symbols to simple names. |
850 | 848 | visit_Nonlocal | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 106 | method | |
851 | 849 | visit_Global | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 110 | method | |
852 | 850 | visit_Name | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 114 | method | |
853 | 851 | visit_Attribute | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 117 | method | |
854 | 852 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 123 | method | |
855 | 853 | rename_symbols | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 130 | function | Renames symbols in an AST. Requires qual_names annotations. |
856 | 854 | keywords_to_dict | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 140 | function | Converts a list of ast.keyword objects to a dict. |
857 | 855 | PatternMatcher | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 150 | class | Matches a node against a pattern represented by a node. |
858 | 856 | compare_and_visit | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 158 | method | |
859 | 857 | no_match | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 164 | method | |
860 | 858 | is_wildcard | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 168 | method | |
861 | 859 | generic_visit | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 177 | method | |
862 | 860 | matches | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 214 | function | Basic pattern matcher for AST.
The pattern may contain wildcards represented by the symbol '_'. A node
matches a pattern if for every node in the tree, either there is a node of
the same type in pattern, or a Name node with id='_'.
Args:
node: ast.AST
pattern: ast.AST
Returns:
bool |
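The wildcard matching that the `matches` docstring describes can be sketched with the standard `ast` module. This is a simplified stand-alone reconstruction, not the actual implementation (which operates on `gast` trees via the PatternMatcher visitor above):

```python
import ast

def pattern_matches(node, pattern):
    # A Name node with id '_' in the pattern matches any subtree.
    if isinstance(pattern, ast.Name) and pattern.id == '_':
        return True
    if type(node) is not type(pattern):
        return False
    for field, expected in ast.iter_fields(pattern):
        actual = getattr(node, field, None)
        if isinstance(expected, list):
            if (not isinstance(actual, list)
                    or len(actual) != len(expected)
                    or not all(pattern_matches(a, e)
                               for a, e in zip(actual, expected))):
                return False
        elif isinstance(expected, ast.AST):
            if not pattern_matches(actual, expected):
                return False
        elif actual != expected:  # primitive fields, e.g. Name.id
            return False
    return True

def matches(node, pattern_src):
    return pattern_matches(node, ast.parse(pattern_src, mode='eval').body)

call = ast.parse('foo(1, x)', mode='eval').body
assert matches(call, 'foo(_, _)')
assert not matches(call, 'bar(_, _)')
```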
863 | 861 | apply_to_single_assignments | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 236 | function | Applies a function to each individual assignment.
This function can process a possibly-unpacked (e.g. a, b = c, d) assignment.
It tries to break down the unpacking if possible; in effect, it behaves as if
the assigned values were passed to apply_fn in SSA form.
Examples:
The following will result in apply_fn(a, c), apply_fn(b, d):
a, b = c, d
The following will result in apply_fn(a, c[0]), apply_fn(b, c[1]):
a, b = c
The following will result in apply_fn(a, (b, c)):
a = b, c
It uses the visitor pattern to allow subclasses to process single
assignments individually.
Args:
targets: Union[List[ast.AST], Tuple[ast.AST, ...], ast.AST], should be
used with the targets field of an ast.Assign node
values: ast.AST
apply_fn: Callable[[ast.AST, ast.AST], None], called with the
respective nodes of each single assignment |
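The unpacking rules from the examples above can be sketched as a stand-alone function. This is a simplified reconstruction (the real helper is a visitor and handles more node types; the synthesized `Subscript` is only handed to `apply_fn`, never compiled):

```python
import ast

def apply_to_single_assignments(targets, values, apply_fn):
    # Break a possibly-unpacked assignment into single (target, value) pairs.
    if not isinstance(targets, (list, tuple)):
        targets = (targets,)
    for target in targets:
        if isinstance(target, (ast.Tuple, ast.List)):
            if isinstance(values, (ast.Tuple, ast.List)):
                # a, b = c, d  ->  (a, c), (b, d)
                for t, v in zip(target.elts, values.elts):
                    apply_to_single_assignments(t, v, apply_fn)
            else:
                # a, b = c  ->  (a, c[0]), (b, c[1])
                for i, t in enumerate(target.elts):
                    element = ast.Subscript(
                        value=values,
                        slice=ast.Constant(value=i),
                        ctx=ast.Load())
                    apply_to_single_assignments(t, element, apply_fn)
        else:
            # a = b, c  ->  (a, (b, c)): no further unpacking possible
            apply_fn(target, values)

assign = ast.parse('a, b = c, d').body[0]
pairs = []
apply_to_single_assignments(
    assign.targets, assign.value,
    lambda t, v: pairs.append((t.id, v.id)))
assert pairs == [('a', 'c'), ('b', 'd')]
```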
864 | 862 | parallel_walk | tensorflow/tensorflow/python/autograph/pyct/ast_util.py | 283 | function | Walks two ASTs in parallel.
The two trees must have identical structure.
Args:
node: Union[ast.AST, Iterable[ast.AST]]
other: Union[ast.AST, Iterable[ast.AST]]
Yields:
Tuple[ast.AST, ast.AST]
Raises:
ValueError: if the two trees don't have identical structure. |
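A minimal sketch of `parallel_walk`'s contract, assuming plain `ast` trees (the real function also accepts iterables of nodes and works on `gast`):

```python
import ast

def parallel_walk(node, other):
    # Yield corresponding node pairs; fail if the structures diverge.
    if type(node) is not type(other):
        raise ValueError('inconsistent nodes: %r and %r' % (node, other))
    yield node, other
    for (_, v1), (_, v2) in zip(ast.iter_fields(node), ast.iter_fields(other)):
        if isinstance(v1, list) and isinstance(v2, list):
            if len(v1) != len(v2):
                raise ValueError('inconsistent lists: %r and %r' % (v1, v2))
            for c1, c2 in zip(v1, v2):
                if isinstance(c1, ast.AST) or isinstance(c2, ast.AST):
                    yield from parallel_walk(c1, c2)
        elif isinstance(v1, ast.AST) or isinstance(v2, ast.AST):
            yield from parallel_walk(v1, v2)

pairs = list(parallel_walk(ast.parse('x = 1'), ast.parse('y = 2')))
names = [(a.id, b.id) for a, b in pairs if isinstance(a, ast.Name)]
assert names == [('x', 'y')]
```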
865 | 863 | CodeObjectCache | tensorflow/tensorflow/python/autograph/pyct/cache.py | 63 | class | A function cache based on code objects.
Code objects are good proxies for the source code of a function.
This cache efficiently handles functions that share code objects, such as
functions defined in a loop, bound methods, etc.
The cache falls back to the function object if it doesn't have a code object. |
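The code-object keying idea can be illustrated in isolation. This is a hypothetical minimal class, not the actual `CodeObjectCache` API (which also supports per-subkey storage):

```python
class CodeObjectCache:
    """Caches values per function, keyed on the function's code object."""

    def __init__(self):
        self._cache = {}

    def _key(self, entity):
        # Functions created repeatedly (e.g. in a loop, or bound methods)
        # share one code object, which proxies for "same source code".
        if hasattr(entity, '__code__'):
            return entity.__code__
        return entity  # fallback for callables without a code object

    def __setitem__(self, entity, value):
        self._cache[self._key(entity)] = value

    def __getitem__(self, entity):
        return self._cache[self._key(entity)]

    def __contains__(self, entity):
        return self._key(entity) in self._cache

def make_fn():
    def fn():
        pass
    return fn

cache = CodeObjectCache()
cache[make_fn()] = 'converted'
# A fresh function object hits the cache because the code object is shared.
assert make_fn() in cache
assert cache[make_fn()] == 'converted'
```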
866 | 864 | UnboundInstanceCache | tensorflow/tensorflow/python/autograph/pyct/cache.py | 81 | class | A function cache based on unbound function objects.
Using the function for the cache key allows efficient handling of object
methods.
Unlike the _CodeObjectCache, this discriminates between different functions
even if they have the same code. This is needed for decorators that may
masquerade as another function. |
867 | 865 | Node | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 54 | class | A node in the CFG.
Although new instances of this class are mutable, the objects that a user
finds in the CFG are typically not.
The nodes represent edges in the CFG graph, and maintain pointers to allow
efficient walking in both forward and reverse order. The following property
holds for all nodes: "child in node.next" iff "node in child.prev".
Attributes:
next: FrozenSet[Node, ...], the nodes that follow this node, in control
flow order
prev: FrozenSet[Node, ...], the nodes that precede this node, in reverse
control flow order
ast_node: ast.AST, the AST node corresponding to this CFG node |
868 | 866 | freeze | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 77 | method | |
869 | 867 | Graph | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 95 | class | A Control Flow Graph.
The CFG maintains an index to allow looking up a CFG node by the AST node to
which it is associated. The index can also be enumerated in top-down, depth
first order.
Walking the graph in forward or reverse order is supported by double
parent-child links.
Note: the error nodes are not wired to their corresponding finally guards,
because these are shared, and wiring them would create a reverse path from
normal control flow into the error nodes, which we want to avoid.
The graph also maintains edges corresponding to higher level statements
like for-else loops. A node is considered successor of a statement if there
is an edge from a node that is lexically a child of that statement to a node
that is not. Statement predecessors are analogously defined.
Attributes:
entry: Node, the entry node
exit: FrozenSet[Node, ...], the exit nodes
error: FrozenSet[Node, ...], nodes that exit due to an explicitly raised
error (errors propagated from function calls are not accounted)
index: Dict[ast.Node, Node], mapping AST nodes to the respective CFG
node
stmt_prev: Dict[ast.Node, FrozenSet[Node, ...]], mapping statement AST
nodes to their predecessor CFG nodes
stmt_next: Dict[ast.Node, FrozenSet[Node, ...]], mapping statement AST
nodes to their successor CFG nodes |
870 | 868 | as_dot | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 133 | method | Print CFG in DOT format. |
871 | 869 | GraphVisitor | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 152 | class | Base class for CFG visitors.
This implementation is not thread safe.
The visitor has some facilities to simplify dataflow analyses. In particular,
it allows revisiting the nodes at the decision of the subclass. This can be
used to visit the graph until the state reaches a fixed point.
For more details on dataflow analysis, see
https://www.seas.harvard.edu/courses/cs252/2011sp/slides/Lec02-Dataflow.pdf
Note: the literature generally suggests visiting successor nodes only when the
state of the current node changed, regardless of whether that successor has
ever been visited. This implementation visits every successor at least once.
Attributes:
graph: Graph
in_: Dict[Node, Any], stores node-keyed state during a visit
out: Dict[Node, Any], stores node-keyed state during a visit |
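The fixed-point visiting pattern GraphVisitor supports can be sketched with a plain worklist. The `graph` representation below (a successor dict) and the `visit_node(node, state)` signature are simplifications for illustration; the real visitor walks `Node.next`/`Node.prev` links:

```python
def visit_until_fixed_point(graph, entry, init_state, visit_node):
    # graph: node -> list of successor nodes.
    state = {n: init_state(n) for n in graph}
    work = [entry]
    while work:
        node = work.pop(0)
        if visit_node(node, state):
            # State changed: successors must be (re)visited.
            work.extend(graph[node])
    return state

# Example analysis: mark every node reachable from the entry.
graph = {'a': ['b'], 'b': ['c', 'a'], 'c': []}

def visit(node, state):
    changed = not state[node]
    state[node] = True
    return changed

state = visit_until_fixed_point(graph, 'a', lambda n: False, visit)
assert state == {'a': True, 'b': True, 'c': True}
```

The loop terminates because `visit` only reports a change the first time a node flips to True, even though the example graph contains a cycle.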
872 | 870 | init_state | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 178 | method | State initialization function. Optional to overload.
An in/out state slot will be created for each node in the graph. Subclasses
must overload this to control what that is initialized to.
Args:
node: Node |
873 | 871 | visit_node | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 190 | method | Visitor function.
Args:
node: Node
Returns:
bool, whether the node should be revisited; subclasses can visit every
reachable node exactly once by always returning False |
874 | 872 | reset | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 201 | method | |
875 | 873 | can_ignore | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 209 | method | Returns True if the node can safely be assumed not to touch variables. |
876 | 874 | visit_forward | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 245 | method | |
877 | 875 | visit_reverse | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 248 | method | |
878 | 876 | GraphBuilder | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 252 | class | Builder that constructs a CFG from a given AST.
This GraphBuilder facilitates constructing the DAG that forms the CFG when
nodes are supplied in lexical order (i.e., top-down, depth first). Under
these conditions, it supports building patterns found in typical structured
programs.
This builder ignores the flow generated by exceptions, which are assumed to
always be catastrophic and present purely for diagnostic purposes (e.g. to
print debug information). Statements like raise and try/catch sections are
allowed and will generate control flow edges, but ordinary statements are
assumed not to raise exceptions.
Finally sections are also correctly interleaved between break/continue/return
nodes and their subsequent statements.
Important concepts:
* nodes - nodes refer to CFG nodes; AST nodes are qualified explicitly
* leaf set - since the graph is constructed gradually, a leaf set maintains
the CFG nodes that will precede the node that the builder expects to
receive next; when an ordinary node is added, it is connected to the
existing leaves and it in turn becomes the new leaf
* jump nodes - nodes that should generate edges other than what
ordinary nodes would; these correspond to break, continue and return
statements
* sections - logical delimiters for subgraphs that require special
edges; there are various types of sections, each admitting various
types of jump nodes; sections are identified by their corresponding AST
node |
879 | 877 | reset | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 292 | method | Resets the state of this factory. |
880 | 878 | begin_statement | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 372 | method | Marks the beginning of a statement.
Args:
stmt: Hashable, a key by which the statement can be identified in
the CFG's stmt_prev and stmt_next attributes |
881 | 879 | end_statement | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 381 | method | Marks the end of a statement.
Args:
stmt: Hashable, a key by which the statement can be identified in
the CFG's stmt_prev and stmt_next attributes; must match a key
previously passed to begin_statement. |
882 | 880 | add_ordinary_node | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 391 | method | Grows the graph by adding an ordinary CFG node.
Ordinary nodes are followed by the next node, in lexical order, that is,
they become the new leaf set.
Args:
ast_node: ast.AST
Returns:
Node |
883 | 881 | add_exit_node | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 438 | method | Grows the graph by adding an exit node.
This node becomes an exit for the current section.
Args:
ast_node: ast.AST
section_id: Hashable, the node for which ast_node should be considered
to be an exit node
guards: Tuple[ast.AST, ...], the finally sections that guard ast_node
Returns:
Node |
884 | 882 | add_continue_node | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 455 | method | Grows the graph by adding a reentry node.
This node causes control flow to go back to the loop section's entry.
Args:
ast_node: ast.AST
section_id: Hashable, the node for which ast_node should be considered
to be an exit node
guards: Tuple[ast.AST, ...], the finally sections that guard ast_node |
885 | 883 | connect_raise_node | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 469 | method | Adds extra connection between a raise node and containing except guards.
The node is a graph node, not an ast node.
Args:
node: Node
except_guards: Tuple[ast.AST, ...], the except sections that guard node |
886 | 884 | enter_section | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 484 | method | Enters a regular section.
Regular sections admit exit jumps, which end the section.
Args:
section_id: Hashable, the same node that will be used in calls to the
ast_node arg passed to add_exit_node |
887 | 885 | exit_section | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 496 | method | Exits a regular section. |
888 | 886 | enter_loop_section | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 505 | method | Enters a loop section.
Loop sections define an entry node. The end of the section always flows back
to the entry node. These admit continue jump nodes which also flow to the
entry node.
Args:
section_id: Hashable, the same node that will be used in calls to the
ast_node arg passed to add_continue_node
entry_node: ast.AST, the entry node into the loop (e.g. the test node
for while loops) |
889 | 887 | exit_loop_section | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 524 | method | Exits a loop section. |
890 | 888 | enter_cond_section | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 539 | method | Enters a conditional section.
Conditional sections define an entry node, and one or more branches.
Args:
section_id: Hashable, the same node that will be used in calls to the
section_id arg passed to new_cond_branch |
891 | 889 | new_cond_branch | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 553 | method | Begins a new branch in a cond section. |
892 | 890 | exit_cond_section | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 567 | method | Exits a conditional section. |
893 | 891 | enter_except_section | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 574 | method | Enters an except section. |
894 | 892 | enter_finally_section | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 579 | method | Enters a finally section. |
895 | 893 | exit_finally_section | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 589 | method | Exits a finally section. |
896 | 894 | build | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 599 | method | Returns the CFG accumulated so far and resets the builder.
Returns:
Graph |
897 | 895 | AstToCfg | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 647 | class | Converts an AST to CFGs.
A separate CFG will be constructed for each function. |
898 | 896 | visit_ClassDef | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 714 | method | |
899 | 897 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 767 | method | |
900 | 898 | visit_Lambda | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 770 | method | |
901 | 899 | visit_Return | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 773 | method | |
902 | 900 | visit_Import | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 776 | method | |
903 | 901 | visit_ImportFrom | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 779 | method | |
904 | 902 | visit_Expr | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 782 | method | |
905 | 903 | visit_Assign | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 785 | method | |
906 | 904 | visit_AnnAssign | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 788 | method | |
907 | 905 | visit_AugAssign | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 791 | method | |
908 | 906 | visit_Pass | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 794 | method | |
909 | 907 | visit_Global | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 797 | method | |
910 | 908 | visit_Nonlocal | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 800 | method | |
911 | 909 | visit_Print | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 803 | method | |
912 | 910 | visit_Raise | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 806 | method | |
913 | 911 | visit_Assert | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 811 | method | |
914 | 912 | visit_Delete | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 815 | method | |
915 | 913 | visit_If | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 818 | method | |
916 | 914 | visit_While | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 840 | method | |
917 | 915 | visit_For | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 863 | method | |
918 | 916 | visit_Break | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 894 | method | |
919 | 917 | visit_Continue | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 897 | method | |
920 | 918 | visit_ExceptHandler | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 900 | method | |
921 | 919 | visit_Try | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 914 | method | |
922 | 920 | visit_With | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 956 | method | |
923 | 921 | build | tensorflow/tensorflow/python/autograph/pyct/cfg.py | 964 | function | |
924 | 922 | CountingVisitor | tensorflow/tensorflow/python/autograph/pyct/cfg_test.py | 28 | class | |
925 | 923 | init_state | tensorflow/tensorflow/python/autograph/pyct/cfg_test.py | 34 | method | |
926 | 924 | visit_node | tensorflow/tensorflow/python/autograph/pyct/cfg_test.py | 37 | method | |
927 | 925 | FrameInfo | tensorflow/tensorflow/python/autograph/pyct/error_utils.py | 26 | class | |
928 | 926 | MultilineMessageKeyError | tensorflow/tensorflow/python/autograph/pyct/error_utils.py | 141 | class | |
929 | 927 | ErrorMetadataBase | tensorflow/tensorflow/python/autograph/pyct/error_utils.py | 153 | class | Container objects attached to exceptions raised in user code.
This metadata allows re-raising exceptions that occur in generated code, with
a custom error message that includes a stack trace relative to user-readable
code from which the generated code originated. |
930 | 928 | get_message | tensorflow/tensorflow/python/autograph/pyct/error_utils.py | 177 | method | Returns the message for the underlying exception. |
931 | 929 | create_exception | tensorflow/tensorflow/python/autograph/pyct/error_utils.py | 211 | method | |
932 | 930 | to_exception | tensorflow/tensorflow/python/autograph/pyct/error_utils.py | 221 | method | |
933 | 931 | PyCTError | tensorflow/tensorflow/python/autograph/pyct/errors.py | 22 | class | Base class for all exceptions. |
934 | 932 | UnsupportedLanguageElementError | tensorflow/tensorflow/python/autograph/pyct/errors.py | 27 | class | Raised for code patterns that AutoGraph does not support. |
935 | 933 | is_literal | tensorflow/tensorflow/python/autograph/pyct/gast_util.py | 40 | function | Tests whether node represents a Python literal. |
936 | 934 | islambda | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 60 | function | |
937 | 935 | isnamedtuple | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 68 | function | Returns True if the argument is a namedtuple-like. |
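A duck-typed namedtuple check along the lines the docstring suggests (class-level only; the actual function may also accept instances):

```python
import collections

def isnamedtuple(value):
    # A namedtuple-like is a tuple subclass with a _fields tuple of strings.
    if not (isinstance(value, type) and issubclass(value, tuple)):
        return False
    fields = getattr(value, '_fields', None)
    return (isinstance(fields, tuple)
            and all(isinstance(f, str) for f in fields))

Point = collections.namedtuple('Point', ('x', 'y'))
assert isnamedtuple(Point)
assert not isnamedtuple(tuple)        # plain tuple has no _fields
assert not isnamedtuple(Point(1, 2))  # instance, not class
```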
938 | 936 | isbuiltin | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 82 | function | Returns True if the argument is a built-in function. |
939 | 937 | isconstructor | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 96 | function | Returns True if the argument is an object constructor.
In general, any object of type class is a constructor, with the exception
of classes created using a callable metaclass.
See below for why a callable metaclass is not a trivial combination:
https://docs.python.org/2.7/reference/datamodel.html#customizing-class-creation
Args:
cls: Any
Returns:
Bool |
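The metaclass caveat in the `isconstructor` docstring can be illustrated as follows. This is a simplified check, not the actual implementation:

```python
def isconstructor(cls):
    # A class constructs instances when called, unless its metaclass
    # overrides __call__ and may return something else entirely.
    meta = type(cls)
    return isinstance(cls, type) and (
        meta is type or '__call__' not in vars(meta))

class Plain:
    pass

class CallableMeta(type):
    def __call__(cls):
        return 'not an instance'

class Odd(metaclass=CallableMeta):
    pass

assert isconstructor(Plain)
assert not isconstructor(Odd)
assert not isconstructor(Plain())  # instances are not constructors
```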
940 | 938 | getimmediatesource | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 143 | function | A variant of inspect.getsource that ignores the __wrapped__ property. |
941 | 939 | getnamespace | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 151 | function | Returns the complete namespace of a function.
Namespace is defined here as the mapping of all non-local variables to values.
This includes the globals and the closure variables. Note that this captures
the entire globals collection of the function, and may contain extra symbols
that it does not actually use.
Args:
f: User defined function.
Returns:
A dict mapping symbol names to values. |
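A sketch of how globals and closure variables combine into one namespace, assuming `co_freevars` aligns with `__closure__` (which CPython guarantees):

```python
G = 7

def getnamespace(f):
    # Globals first, then closure variables (cell names come from
    # co_freevars, which is kept aligned with __closure__).
    namespace = dict(f.__globals__)
    for name, cell in zip(f.__code__.co_freevars, f.__closure__ or ()):
        namespace[name] = cell.cell_contents
    return namespace

def outer():
    x = 3
    def inner():
        return x + G
    return inner

ns = getnamespace(outer())
assert ns['x'] == 3 and ns['G'] == 7
```

Note that, as the docstring warns, the result contains the function's entire globals, including symbols it never uses.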
942 | 940 | getqualifiedname | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 177 | function | Returns the name by which a value can be referred to in a given namespace.
If the object defines a parent module, the function attempts to use it to
locate the object.
This function will recurse inside modules, but it will not search objects for
attributes. The recursion depth is controlled by max_depth.
Args:
namespace: Dict[str, Any], the namespace to search into.
object_: Any, the value to search.
max_depth: Optional[int], a limit to the recursion depth when searching
inside modules.
visited: Optional[Set[int]], ID of modules to avoid visiting.
Returns: Union[str, None], the fully-qualified name that resolves to the value
object_, or None if it couldn't be found. |
943 | 941 | getdefiningclass | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 250 | function | Resolves the class (e.g. one of the superclasses) that defined a method. |
944 | 942 | getmethodclass | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 265 | function | Resolves a function's owner, e.g. a method's class.
Note that this returns the object that the function was retrieved from, not
necessarily the class where it was defined.
This function relies on Python stack frame support in the interpreter, and
has the same limitations as inspect.currentframe.
Limitations. This function will only work correctly if the owning class is
visible in the caller's global or local variables.
Args:
m: A user defined function
Returns:
The class that this function was retrieved from, or None if the function
is not an object or class method, or the class that owns the object or
method is not visible to m.
Raises:
ValueError: if the class could not be resolved for any unexpected reason. |
945 | 943 | getfutureimports | tensorflow/tensorflow/python/autograph/pyct/inspect_utils.py | 339 | function | Detects what future imports are necessary to safely execute entity source.
Args:
entity: Any object
Returns:
A tuple of future strings |
946 | 944 | decorator | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 37 | function | |
947 | 945 | function_decorator | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 41 | function | |
948 | 946 | wrapping_decorator | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 47 | function | |
949 | 947 | free_function | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 85 | function | |
950 | 948 | factory | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 89 | function | |
951 | 949 | free_factory | tensorflow/tensorflow/python/autograph/pyct/inspect_utils_test.py | 93 | function | |
952 | 950 | load_source | tensorflow/tensorflow/python/autograph/pyct/loader.py | 50 | function | Loads the given source code as a Python module. |
953 | 951 | load_ast | tensorflow/tensorflow/python/autograph/pyct/loader.py | 70 | function | Loads the given AST as a Python module.
Compiling the AST code this way ensures that the source code is readable by
e.g. `pdb` or `inspect`.
Args:
nodes: Union[ast.AST, Iterable[ast.AST]], the code to compile, as an AST
object.
indentation: Text, the string to use for indentation.
include_source_map: bool, whether to return a source map.
delete_on_exit: bool, whether to delete the temporary file used for
compilation on exit.
Returns:
Tuple[module, Text, Dict[LineLocation, OriginInfo]], containing:
the module containing the unparsed nodes, the source code corresponding to
nodes, and the source map. If include_source_map is False, the source map
will be None. |
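The file-backed loading strategy the docstring describes (source stays on disk so `pdb`/`inspect` can find it) can be sketched with `importlib`; the module name `generated_mod` below is arbitrary:

```python
import importlib.util
import tempfile

def load_source(source):
    # Writing to a real file (rather than exec'ing a string) lets
    # inspect.getsource and pdb locate the code later.
    with tempfile.NamedTemporaryFile(
            mode='w', suffix='.py', delete=False) as f:
        f.write(source)
    spec = importlib.util.spec_from_file_location('generated_mod', f.name)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module, f.name

module, path = load_source('def double(x):\n  return 2 * x\n')
assert module.double(21) == 42
```

The real loaders additionally register the file for deletion on exit when `delete_on_exit` is set; the sketch leaves the temporary file in place.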
954 | 952 | load_source | tensorflow/tensorflow/python/autograph/pyct/loader_deprecated_py2.py | 40 | function | Loads the given source code as a Python module. |
955 | 953 | load_ast | tensorflow/tensorflow/python/autograph/pyct/loader_deprecated_py2.py | 58 | function | Loads the given AST as a Python module.
Compiling the AST code this way ensures that the source code is readable by
e.g. `pdb` or `inspect`.
Args:
nodes: Union[ast.AST, Iterable[ast.AST]], the code to compile, as an AST
object.
indentation: Text, the string to use for indentation.
include_source_map: bool, whether to return a source map.
delete_on_exit: bool, whether to delete the temporary file used for
compilation on exit.
Returns:
Tuple[module, Text, Dict[LineLocation, OriginInfo]], containing:
the module containing the unparsed nodes, the source code corresponding to
nodes, and the source map. If include_source_map is False, the source map
will be None. |
956 | 954 | Namer | tensorflow/tensorflow/python/autograph/pyct/naming.py | 24 | class | Symbol name generator. |
957 | 955 | new_symbol | tensorflow/tensorflow/python/autograph/pyct/naming.py | 31 | method | See control_flow.SymbolNamer.new_symbol. |
958 | 956 | LineLocation | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 35 | class | Similar to Location, but without column information.
Attributes:
filename: Text
lineno: int, 1-based |
959 | 957 | Location | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 46 | class | Encodes code location information.
Attributes:
filename: Text
lineno: int, 1-based
col_offset: int
line_loc: LineLocation |
960 | 958 | line_loc | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 58 | method | |
961 | 959 | OriginInfo | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 62 | class | Container for information about the source code before conversion.
Attributes:
loc: Location
function_name: Optional[Text]
source_code_line: Text
comment: Optional[Text] |
962 | 960 | as_frame | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 75 | method | Returns a 4-tuple consistent with the return of traceback.extract_tb. |
963 | 961 | create_source_map | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 89 | function | Creates a source map between an annotated AST and the code it compiles to.
Note: this function assumes nodes, code and filepath correspond to the
same code.
Args:
nodes: Iterable[ast.AST, ...], one or more AST nodes.
code: Text, the source code in which nodes are found.
filepath: Text
Returns:
Dict[LineLocation, OriginInfo], mapping locations in code to locations
indicated by origin annotations in node. |
964 | 962 | OriginResolver | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 166 | class | Annotates an AST with additional source information like file name. |
965 | 963 | visit | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 212 | method | |
966 | 964 | resolve | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 226 | function | Adds origin information to an AST, based on the source it was loaded from.
This allows us to map the original source code line numbers to generated
source code.
Note: the AST may be a part of a larger context (e.g. a function is part of
a module that may contain other things). However, this function does not
assume the source argument contains the entire context, nor that it contains
only code corresponding to node itself. It does, however, assume that node
was parsed from the given source code.
For this reason, two extra arguments are required, and they indicate the
location of the node in the original context.
Args:
node: gast.AST, the AST to annotate.
source: Text, the source code representing node.
context_filepath: Text
context_lineno: int
context_col_offset: int |
967 | 965 | resolve_entity | tensorflow/tensorflow/python/autograph/pyct/origin_info.py | 271 | function | Like resolve, but extracts the context information from an entity. |
968 | 966 | dedent_block | tensorflow/tensorflow/python/autograph/pyct/parser.py | 65 | function | Dedents a code so that its first line starts at row zero. |
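For uniformly indented blocks, the dedenting step reduces to `textwrap.dedent`; the real helper tokenizes the source so it can also handle uneven indentation:

```python
import textwrap

def dedent_block(source):
    # Simplified: handles only blocks whose lines share a common
    # leading-whitespace prefix, e.g. a method extracted via inspect.
    return textwrap.dedent(source)

method_source = '    def f(x):\n        return x\n'
assert dedent_block(method_source).startswith('def f')
```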
969 | 967 | parse_entity | tensorflow/tensorflow/python/autograph/pyct/parser.py | 133 | function | Returns the AST and source code of given entity.
Args:
entity: Any, Python function/method/class
future_features: Iterable[Text], future features to use (e.g.
'print_statement'). See
https://docs.python.org/2/reference/simple_stmts.html#future
Returns:
gast.AST, Text: the parsed AST node; the source code that was parsed to
generate the AST (including any prefixes that this function may have added). |
970 | 968 | parse | tensorflow/tensorflow/python/autograph/pyct/parser.py | 323 | function | Returns the AST of a given piece of code.
Args:
src: Text
preamble_len: Int, indicates leading nodes in the parsed AST which should be
dropped.
single_node: Bool, whether `src` is assumed to be represented by exactly one
AST node.
Returns:
ast.AST |
971 | 969 | parse_expression | tensorflow/tensorflow/python/autograph/pyct/parser.py | 347 | function | Returns the AST of a given expression.
Args:
src: A piece of code that represents a single Python expression
Returns:
A gast.AST object.
Raises:
ValueError: if src does not consist of a single Expression. |
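The single-expression constraint maps directly onto `ast.parse(..., mode='eval')`. A sketch, with the caveat that the real function returns `gast` nodes and raises ValueError rather than SyntaxError:

```python
import ast

def parse_expression(src):
    # mode='eval' only accepts a single expression; statements are
    # rejected at parse time.
    return ast.parse(src, mode='eval').body

assert isinstance(parse_expression('a + b'), ast.BinOp)
assert isinstance(parse_expression('f(x)'), ast.Call)
```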
972 | 970 | unparse | tensorflow/tensorflow/python/autograph/pyct/parser.py | 366 | function | Returns the source code of a given AST.
Args:
node: The code to compile, as an AST object.
indentation: Unused, deprecated. The returned code will always be indented
at 4 spaces.
include_encoding_marker: Bool, whether to include a comment on the first
line to explicitly specify UTF-8 encoding.
Returns:
code: The source code generated from the AST object
source_mapping: A mapping between the user and AutoGraph generated code. |
973 | 971 | PrettyPrinter | tensorflow/tensorflow/python/autograph/pyct/pretty_printer.py | 26 | class | Print AST nodes. |
974 | 972 | generic_visit | tensorflow/tensorflow/python/autograph/pyct/pretty_printer.py | 59 | method | |
975 | 973 | fmt | tensorflow/tensorflow/python/autograph/pyct/pretty_printer.py | 128 | function | |
976 | 974 | CallerMustSetThis | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 36 | class | |
977 | 975 | Symbol | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 40 | class | Represents a Python symbol. |
978 | 976 | Literal | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 44 | class | Represents a Python numeric literal. |
979 | 977 | QN | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 57 | class | Represents a qualified name. |
980 | 978 | is_symbol | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 95 | method | |
981 | 979 | is_simple | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 98 | method | |
982 | 980 | is_composite | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 101 | method | |
983 | 981 | has_subscript | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 104 | method | |
984 | 982 | has_attr | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 107 | method | |
985 | 983 | parent | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 111 | method | |
986 | 984 | owner_set | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 117 | method | Returns all the symbols (simple or composite) that own this QN.
In other words, if this symbol was modified, the symbols in the owner set
may also be affected.
Examples:
'a.b[c.d]' has two owners, 'a' and 'a.b' |
987 | 985 | support_set | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 133 | method | Returns the set of simple symbols that this QN relies on.
This would be the smallest set of symbols necessary for the QN to
statically resolve (assuming properties and index ranges are verified
at runtime).
Examples:
'a.b' has only one support symbol, 'a'
'a[i]' has two support symbols, 'a' and 'i' |
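For purely dotted names, both the owner set and the support set described in the docstrings above can be computed from the string form alone; subscripted names like 'a.b[c.d]' and 'a[i]' need the full QN class and are omitted from this sketch:

```python
def owner_set(qn):
    # All strict prefixes of a dotted name own it: modifying any of
    # them may affect the full name.
    parts = qn.split('.')
    return {'.'.join(parts[:i]) for i in range(1, len(parts))}

def support_set(qn):
    # A dotted name statically resolves from its root symbol alone.
    return {qn.split('.')[0]}

assert owner_set('a.b.c') == {'a', 'a.b'}
assert support_set('a.b') == {'a'}
```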
988 | 986 | ssf | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 175 | method | Simple symbol form. |
989 | 987 | ast | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 187 | method | AST representation. |
990 | 988 | QnResolver | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 210 | class | Annotates nodes with QN information.
Note: Not using NodeAnnos to avoid circular dependencies. |
991 | 989 | visit_Name | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 216 | method | |
992 | 990 | visit_Attribute | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 221 | method | |
993 | 991 | visit_Subscript | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 228 | method | |
994 | 992 | resolve | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 251 | function | |
995 | 993 | from_str | tensorflow/tensorflow/python/autograph/pyct/qual_names.py | 255 | function | |
996 | 994 | ContextAdjuster | tensorflow/tensorflow/python/autograph/pyct/templates.py | 35 | class | Adjusts the ctx field of nodes to ensure consistency.
This transformer can change the ctx fields of a variable, tuple and other
AST elements that allow one, based on whether the element is being read or
written. |
997 | 995 | visit | tensorflow/tensorflow/python/autograph/pyct/templates.py | 46 | method | |
998 | 996 | visit_Attribute | tensorflow/tensorflow/python/autograph/pyct/templates.py | 58 | method | |
999 | 997 | visit_Tuple | tensorflow/tensorflow/python/autograph/pyct/templates.py | 64 | method | |
1000 | 998 | visit_List | tensorflow/tensorflow/python/autograph/pyct/templates.py | 68 | method | |
1001 | 999 | visit_Name | tensorflow/tensorflow/python/autograph/pyct/templates.py | 72 | method | |
1002 | 1000 | visit_Call | tensorflow/tensorflow/python/autograph/pyct/templates.py | 76 | method | |
1003 | 1001 | visit_Dict | tensorflow/tensorflow/python/autograph/pyct/templates.py | 83 | method | |
1004 | 1002 | visit_Subscript | tensorflow/tensorflow/python/autograph/pyct/templates.py | 89 | method | |
1005 | 1003 | visit_comprehension | tensorflow/tensorflow/python/autograph/pyct/templates.py | 95 | method | |
1006 | 1004 | visit_Lambda | tensorflow/tensorflow/python/autograph/pyct/templates.py | 101 | method | |
1007 | 1005 | ReplaceTransformer | tensorflow/tensorflow/python/autograph/pyct/templates.py | 108 | class | Replace AST nodes. |
1008 | 1006 | visit_Expr | tensorflow/tensorflow/python/autograph/pyct/templates.py | 146 | method | |
1009 | 1007 | visit_keyword | tensorflow/tensorflow/python/autograph/pyct/templates.py | 154 | method | |
1010 | 1008 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/pyct/templates.py | 172 | method | |
1011 | 1009 | visit_Attribute | tensorflow/tensorflow/python/autograph/pyct/templates.py | 185 | method | |
1012 | 1010 | visit_Name | tensorflow/tensorflow/python/autograph/pyct/templates.py | 197 | method | |
1013 | 1011 | replace | tensorflow/tensorflow/python/autograph/pyct/templates.py | 234 | function | Replaces placeholders in a Python template.
AST Name and Tuple nodes always receive the context that is inferred from
the template. However, when replacing more complex nodes (that can potentially
contain Name children), the caller is responsible for setting the
appropriate context.
Args:
template: A string representing Python code. Any symbol name that appears
in the template code can be used as a placeholder.
**replacements: A mapping from placeholder names to (lists of) AST nodes
that these placeholders will be replaced by. String values are also
supported as a shorthand for AST Name nodes with the respective ID.
Returns:
An AST node or list of AST nodes with the replacements made. If the
template was a function, a list will be returned. If the template was a
node, the same node will be returned. If the template was a string, an
AST node will be returned (a `Module` node in the case of a multi-line
string, an `Expr` node otherwise).
Raises:
ValueError: if the arguments are incorrect. |
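The placeholder-substitution behavior described for `replace` can be sketched with the standard `ast` module. This is a minimal, hypothetical stand-in for autograph's `ReplaceTransformer`, handling only the Name case and the string shorthand:

```python
import ast

class _PlaceholderReplacer(ast.NodeTransformer):
    # Hypothetical minimal stand-in: replaces Name nodes whose id is a
    # placeholder with the supplied replacement.
    def __init__(self, replacements):
        self.replacements = replacements

    def visit_Name(self, node):
        repl = self.replacements.get(node.id)
        if repl is None:
            return node
        # String shorthand: treat the value as a Name with that id.
        new = ast.Name(id=repl, ctx=node.ctx) if isinstance(repl, str) else repl
        return ast.copy_location(new, node)

def replace(template, **replacements):
    tree = ast.parse(template)
    return ast.fix_missing_locations(
        _PlaceholderReplacer(replacements).visit(tree))

tree = replace('result = fn_name(arg)', fn_name='compute', arg='x')
print(ast.unparse(tree))  # result = compute(x)
```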
1014 | 1012 | replace_as_expression | tensorflow/tensorflow/python/autograph/pyct/templates.py | 279 | function | Variant of replace that generates expressions, instead of code blocks. |
1015 | 1013 | AnalysisLevel | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 32 | class | |
1016 | 1014 | Context | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 41 | class | Contains information about a source code transformation.
This object is mutable, and is updated during conversion. Not thread safe.
Attributes:
info: EntityInfo, immutable.
namer: naming.Namer.
current_origin: origin_info.OriginInfo, holds the OriginInfo of the last
AST node to be processed successfully. Useful for error handling.
user: A user-supplied context object. The object is opaque to the
infrastructure, but will be passed through to all custom transformations. |
1017 | 1015 | EntityInfo | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 63 | class | Contains information about a Python entity.
Immutable.
Examples of entities include functions and classes.
Attributes:
name: The name that identifies this entity.
source_code: The entity's source code.
source_file: The entity's source file.
future_features: Tuple[Text], the future features that this entity was
compiled with. See
https://docs.python.org/2/reference/simple_stmts.html#future.
namespace: Dict[str, ], containing symbols visible to the entity (excluding
parameters). |
1018 | 1016 | NodeStateTracker | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 200 | class | Base class for general-purpose Python code transformation.
This abstract class provides helpful functions, like state tracking within
the scope of an arbitrary node, helpers for processing code blocks, debugging,
mapping of transformed code to original code, and others.
Scope-local state tracking: to keep state across nodes, at the level of
(possibly nested) scopes, use enter/exit_local_scope and set/get_local.
You must call enter/exit_local_scope manually, but the transformer detects
when they are not properly paired.
The transformer allows keeping state across calls that is local
to arbitrary nodes and their descendants, using the self.state attribute.
Multiple independent scopes are allowed and automatically constructed.
For example, to keep track of the `If` node that encloses any `Name` node,
one can write:
```
class FooType(object):
def __init__(self):
self.foo_property = None
class DummyTransformer(NodeStateTracker, ast.NodeTransformer):
def visit_If(self, node):
self.state[FooType].enter()
self.state[FooType].foo_property = node
node = self.generic_visit(node)
self.state[FooType].exit()
return node
def visit_Name(self, node):
self.state[FooType].foo_property # will hold the innermost enclosing if
```
Alternatively, the `enter()`/`exit()` calls can be managed by a `with`
statement:
```
def visit_If(self, node):
with self.state[FooType] as foo:
foo.foo_property = node
return self.generic_visit(node)
``` |
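The scope-local state tracking described above can be illustrated with the standard `ast` module. `_StateStack` here is a simplified, hypothetical stand-in for autograph's per-type `self.state[...]` stacks; the real mechanism keys frames by class and supports the `with` protocol:

```python
import ast

class _StateStack:
    # Simplified stand-in: enter()/exit() push and pop a frame;
    # `value` reads the innermost frame (None outside any frame).
    def __init__(self):
        self._frames = []
    def enter(self):
        self._frames.append({})
    def exit(self):
        self._frames.pop()
    @property
    def value(self):
        return self._frames[-1] if self._frames else None

class IfTracker(ast.NodeVisitor):
    """Records, for every Name, the innermost enclosing If node."""
    def __init__(self):
        self.state = _StateStack()
        self.enclosing = {}  # name id -> If node, or None

    def visit_If(self, node):
        self.state.enter()
        self.state.value['if_node'] = node
        self.generic_visit(node)
        self.state.exit()

    def visit_Name(self, node):
        frame = self.state.value
        self.enclosing[node.id] = frame['if_node'] if frame else None

tracker = IfTracker()
tracker.visit(ast.parse('a = 1\nif cond:\n    b = 2'))
print(tracker.enclosing['a'] is None)             # True: outside any if
print(isinstance(tracker.enclosing['b'], ast.If))  # True
```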
1019 | 1017 | debug_print | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 270 | method | Helper method useful for debugging. Prints the AST. |
1020 | 1018 | debug_print_src | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 276 | method | Helper method useful for debugging. Prints the AST as code. |
1021 | 1019 | visit_block | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 282 | method | A more powerful version of generic_visit for statement blocks.
An example of a block is the body of an if statement.
This function allows specifying a postprocessing callback (the
after_visit argument) which can be used to move nodes to a new
destination. This is done by having after_visit return a non-null
second return value, e.g. return new_node, new_destination.
For example, a transformer could perform the following move:
foo()
bar()
baz()
foo()
if cond:
  bar()
  baz()
The above could be done with a postprocessor of this kind:
def after_visit(node):
if node_is_function_call(bar):
new_container_node = build_cond()
new_container_node.body.append(node)
return new_container_node, new_container_node.body
else:
# Once we set a new destination, all subsequent items will be
# moved to it, so we don't need to explicitly handle baz.
return node, None
Args:
nodes: enumerable of AST node objects. If None, the function returns None.
before_visit: optional callable that is called before visiting each item
in nodes
after_visit: optional callable that takes in an AST node and returns a
tuple (new_node, new_destination). It is called after visiting each item
in nodes. It is used in the same way as the
visit_* methods: new_node will replace the node; if not None,
new_destination must be a list, and subsequent nodes will be placed
in this list instead of the list returned by visit_block.
Returns:
A list of AST node objects containing the transformed items from nodes,
except those nodes that have been relocated using after_visit. |
1022 | 1020 | Base | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 360 | class | Base class for general-purpose Python-to-Python code transformation.
This is an extension of ast.NodeTransformer that provides the additional
functions offered by NodeStateTracker. |
1023 | 1021 | create_assignment | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 367 | method | |
1024 | 1022 | apply_to_single_assignments | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 374 | method | Applies a function to each individual assignment.
This function can process a possibly-unpacked (e.g. a, b = c, d) assignment.
It tries to break down the unpacking if possible. In effect, it is
equivalent to passing the assigned values in SSA form to apply_fn.
Examples:
The following will result in apply_fn(a, c), apply_fn(b, d):
a, b = c, d
The following will result in apply_fn(a, c[0]), apply_fn(b, c[1]):
a, b = c
The following will result in apply_fn(a, (b, c)):
a = b, c
It uses the visitor pattern to allow subclasses to process single
assignments individually.
Args:
targets: list, tuple of or individual AST node. Should be used with the
targets field of an ast.Assign node.
values: an AST node.
apply_fn: a function of a single argument, which will be called with the
respective nodes of each single assignment. The signature is
apply_fn(target, value), no return value. |
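The pairing behavior documented above can be sketched over standard `ast` nodes. This is an illustrative reimplementation of the documented contract, not the autograph method itself:

```python
import ast

def apply_to_single_assignments(targets, values, apply_fn):
    # Recursively pair unpacked targets with values, calling
    # apply_fn(target, value) for each single assignment.
    if not isinstance(targets, (list, tuple)):
        targets = (targets,)
    for target in targets:
        if isinstance(target, (ast.Tuple, ast.List)) and isinstance(
                values, (ast.Tuple, ast.List)):
            # a, b = c, d  ->  pair element-wise.
            for t, v in zip(target.elts, values.elts):
                apply_to_single_assignments(t, v, apply_fn)
        elif isinstance(target, (ast.Tuple, ast.List)):
            # a, b = c  ->  pair with c[0], c[1].
            for i, t in enumerate(target.elts):
                index = ast.Subscript(value=values,
                                      slice=ast.Constant(value=i),
                                      ctx=ast.Load())
                apply_to_single_assignments(t, index, apply_fn)
        else:
            apply_fn(target, values)

node = ast.parse('a, b = c, d').body[0]
pairs = []
apply_to_single_assignments(
    node.targets, node.value,
    lambda t, v: pairs.append((ast.unparse(t), ast.unparse(v))))
print(pairs)  # [('a', 'c'), ('b', 'd')]
```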
1025 | 1023 | visit | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 421 | method | |
1026 | 1024 | CodeGenerator | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 478 | class | Base class for general-purpose Python-to-string code transformation.
Similar to Base, but outputs arbitrary strings instead of a Python AST.
This uses the same visitor mechanism that the standard NodeVisitor uses,
meaning that subclasses write handlers for the different kinds of nodes.
New code is generated using the emit method, which appends to a code buffer
that can be afterwards obtained from code_buffer.
Example:
class SimpleCodeGen(CodeGenerator):
def visitIf(self, node):
self.emit('if ')
self.visit(node.test)
self.emit(' { ')
self.visit(node.body)
self.emit(' } else { ')
self.visit(node.orelse)
self.emit(' } ')
node = ast.parse(...)
gen = SimpleCodeGen()
gen.visit(node)
# gen.code_buffer contains the resulting code |
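The `CodeGenerator` example in the docstring can be made runnable on top of the standard `ast.NodeVisitor`; the emit buffer is the only extra machinery (a sketch, assuming only If/Name/Expr handlers are needed):

```python
import ast

class SimpleCodeGen(ast.NodeVisitor):
    """Minimal string-emitting visitor: handlers call emit() to append
    to a buffer, which is read back via code_buffer."""
    def __init__(self):
        self._parts = []

    def emit(self, s):
        self._parts.append(s)

    @property
    def code_buffer(self):
        return ''.join(self._parts)

    def visit_If(self, node):
        self.emit('if ')
        self.visit(node.test)
        self.emit(' { ')
        for stmt in node.body:
            self.visit(stmt)
        self.emit(' }')

    def visit_Expr(self, node):
        self.visit(node.value)

    def visit_Name(self, node):
        self.emit(node.id)

gen = SimpleCodeGen()
gen.visit(ast.parse('if cond:\n    x'))
print(gen.code_buffer)  # if cond { x }
```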
1027 | 1025 | emit | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 513 | method | |
1028 | 1026 | code_buffer | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 517 | method | |
1029 | 1027 | visit | tensorflow/tensorflow/python/autograph/pyct/transformer.py | 520 | method | |
1030 | 1028 | GenericTranspiler | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 227 | class | A generic transpiler for Python functions.
Its interface is the `transform` API, which can process Python function
objects. Internally, it handles parsing.
Users typically subclass this, customizing the `transform_ast` method. The
output of `transform_ast` is returned directly by `transform`. Existing
methods like `transform_function` may also be overloaded.
Example:
class MyTransformer(GenericTranspiler):
def transform_ast(self, node, ctx):
result = <<transform node>>
return result
transformer = MyTransformer()
result = transformer.transform(f, ...)
# result is the output |
1031 | 1029 | get_transformed_name | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 251 | method | Returns a name for the output function. Subclasses may override this. |
1032 | 1030 | transform_ast | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 259 | method | Performs an actual transformation of a function's AST.
Subclasses must implement this method, and do not usually call it.
Args:
node: One or more ast.AST nodes representing the AST to be transformed.
ctx: transformer.Context. |
1033 | 1031 | transform | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 270 | method | Transforms a Python object.
Users typically call this method.
Args:
obj: A Python object, function, type, etc.
user_context: An opaque object (may be None) that is forwarded to
transform_ast, through the ctx.user_context argument.
Returns:
The result of calling transform_function.
Raises:
NotImplementedError: if the type of obj is not handled. |
1034 | 1032 | transform_module | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 300 | method | Transforms a module.
Subclasses may override this method. The return value is opaque.
The method receives the original AST. The result is passed as-is to the
output of `transform`.
Args:
mod: A Python module.
user_context: An opaque object (may be None) that is forwarded to
transform_ast, through the ctx.user_context argument.
Returns:
List[Tuple[Any, Any]]. By default it returns the output of transform_ast,
evaluated on each supported member, other than modules, together with a
`transformer.Context` containing information about the transformation
process. |
1035 | 1033 | transform_function | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 328 | method | Transforms a function.
Subclasses may override this method. The return value is opaque.
The method receives the original AST. The result is passed as-is to the
output of `transform`.
Args:
fn: A function or lambda.
user_context: An opaque object (may be None) that is forwarded to
transform_ast, through the ctx.user_context argument.
Returns:
Tuple[Any, Any]. By default it returns the output of transform_ast,
together with a `transformer.Context` containing information about the
transformation process. |
1036 | 1034 | PyToPy | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 368 | class | A generic Python-to-Python transpiler.
Its `transform` method offers a function-in, function-out interface.
Internally, it takes care of parsing, caching and loading of the translated
code.
Users typically subclass this, overriding `transform_ast`.
Usually, instances of this class are singletons, since each instance manages
its own cache. The caching can be controlled by overriding `get_caching_key`.
Example:
class MyTransformer(PyToPy):
def transform_ast(self, node, ctx):
node = <<transform node, usually using ast.NodeTransformer classes>>
return node
transformer = MyTransformer()
new_f, module, source_map = transformer.transform_function(f, ...)
# new_f is a function with signature identical to f
The transformed function has access to the same namespace as the original
function. To allow access to internal APIs, users may inject additional
symbols by overriding `get_extra_locals`. |
1037 | 1035 | get_extra_locals | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 402 | method | Returns extra static local variables to be made available to transformed code.
Subclasses must override this.
Returns:
extra_locals: A Dict[Text, Any] containing additional variables to make
available to the transformed code. |
1038 | 1036 | get_caching_key | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 413 | method | Returns a unique key to use for caching.
Subclasses must override this.
Calls made to `transform_function` with functions that have the same code
object and caching key will return a cached instance on subsequent
invocations.
Args:
user_context: The context object which was passed to `transform`.
Returns:
extra_locals: A hashable. |
1039 | 1037 | transform_function | tensorflow/tensorflow/python/autograph/pyct/transpiler.py | 436 | method | Transforms a function. See GenericTranspiler.transform_function.
This overload wraps the parent's `transform_function`, adding caching and
facilities to instantiate the output as a Python object. It also
adds facilities to make new symbols available to the generated Python code,
visible as local variables - see `get_extra_locals`.
Args:
fn: A function or lambda.
user_context: An opaque object (may be None) that is forwarded to
transform_ast, through the ctx.user_context argument.
Returns:
A tuple:
* A function or lambda with the same signature and closure as `fn`
* The temporary module into which the transformed function was loaded
* The source map as a
Dict[origin_info.LineLocation, origin_info.OriginInfo] |
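The function-in, function-out flow of `PyToPy.transform_function` can be sketched with the standard library: parse, transform the AST, then load the result into a fresh module namespace. This is a simplified model (the real implementation obtains the source with `inspect`, preserves the closure, caches by code object, and builds a source map); `FlipSign` is a hypothetical transform that rewrites `a + b` into `a - b`:

```python
import ast
import types

class FlipSign(ast.NodeTransformer):
    # Hypothetical transform: rewrite every a + b into a - b.
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

def transform_source(src, fn_name):
    """Parse, transform, then exec the result into a fresh module,
    returning the new function and the module that holds it."""
    tree = ast.fix_missing_locations(FlipSign().visit(ast.parse(src)))
    module = types.ModuleType('transformed')
    exec(compile(tree, '<transformed>', 'exec'), module.__dict__)
    return module.__dict__[fn_name], module

new_f, mod = transform_source('def f(x, y):\n    return x + y\n', 'f')
print(new_f(5, 3))  # 2, since x + y became x - y
```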
1040 | 1038 | FlipSignTransformer | tensorflow/tensorflow/python/autograph/pyct/transpiler_test.py | 30 | class | |
1041 | 1039 | visit_BinOp | tensorflow/tensorflow/python/autograph/pyct/transpiler_test.py | 32 | method | |
1042 | 1040 | DummyGensym | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 40 | class | A dumb gensym that suffixes a stem by sequential numbers from 1000. |
1043 | 1041 | new_name | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 50 | method | |
1044 | 1042 | ASTEdgePattern | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 60 | class | A pattern defining a type of AST edge.
This consists of three components:
- The type of the parent node, checked with isinstance,
- The name of the field, checked with string equality, and
- The type of the child node, also checked with isinstance.
If all three match, the whole pattern is considered to match.
In all three slots, the special value `anf.ANY` is treated as "match
anything". The internal nodes are produced from the `gast` library rather
than the standard `ast` module, which may affect `isinstance` checks. |
1045 | 1043 | matches | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 76 | method | Computes whether this pattern matches the given edge. |
1046 | 1044 | AnfTransformer | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 89 | class | Performs the conversion to A-normal form (ANF). |
1047 | 1045 | visit_Return | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 256 | method | |
1048 | 1046 | visit_Delete | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 259 | method | |
1049 | 1047 | visit_Assign | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 262 | method | |
1050 | 1048 | visit_AugAssign | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 265 | method | |
1051 | 1049 | visit_Print | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 268 | method | |
1052 | 1050 | visit_For | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 271 | method | |
1053 | 1051 | visit_AsyncFor | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 290 | method | |
1054 | 1052 | visit_While | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 295 | method | |
1055 | 1053 | visit_If | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 307 | method | |
1056 | 1054 | visit_With | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 324 | method | |
1057 | 1055 | visit_AsyncWith | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 343 | method | |
1058 | 1056 | visit_Raise | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 348 | method | |
1059 | 1057 | visit_Assert | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 353 | method | |
1060 | 1058 | visit_Exec | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 361 | method | |
1061 | 1059 | visit_Expr | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 366 | method | |
1062 | 1060 | visit_BoolOp | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 371 | method | |
1063 | 1061 | visit_BinOp | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 376 | method | |
1064 | 1062 | visit_UnaryOp | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 379 | method | |
1065 | 1063 | visit_Lambda | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 382 | method | |
1066 | 1064 | visit_IfExp | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 387 | method | |
1067 | 1065 | visit_Dict | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 393 | method | |
1068 | 1066 | visit_Set | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 396 | method | |
1069 | 1067 | visit_ListComp | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 399 | method | |
1070 | 1068 | visit_SetComp | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 405 | method | |
1071 | 1069 | visit_DictComp | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 411 | method | |
1072 | 1070 | visit_GeneratorExp | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 417 | method | |
1073 | 1071 | visit_Await | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 423 | method | |
1074 | 1072 | visit_Yield | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 428 | method | |
1075 | 1073 | visit_YieldFrom | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 431 | method | |
1076 | 1074 | visit_Compare | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 436 | method | |
1077 | 1075 | visit_Call | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 443 | method | |
1078 | 1076 | visit_Repr | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 446 | method | |
1079 | 1077 | visit_FormattedValue | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 451 | method | |
1080 | 1078 | visit_JoinedStr | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 456 | method | |
1081 | 1079 | visit_Attribute | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 461 | method | |
1082 | 1080 | visit_Subscript | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 464 | method | |
1083 | 1081 | visit_List | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 470 | method | |
1084 | 1082 | visit_Tuple | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 476 | method | |
1085 | 1083 | transform | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf.py | 527 | function | Converts the given node to A-normal form (ANF).
The general idea of A-normal form: https://en.wikipedia.org/wiki/A-normal_form
The specific converters used here are based on Python AST semantics as
documented at https://greentreesnakes.readthedocs.io/en/latest/.
What exactly should be considered A-normal form for any given programming
language is not completely obvious. The transformation defined here is
therefore configurable as to which syntax to replace with a fresh variable and
which to leave be. The configuration is intentionally flexible enough to
define very precise variable insertion transformations, should that be
desired.
The configuration is a list of syntax rules, each of which is a 2-tuple:
- An `ASTEdgePattern` (see above) defining a type of AST edge, and
- Whether to transform children of such edges.
The special object `anf.ANY` may be used as a pattern that matches all edges.
Each replacement directive is one of three possible things:
- The object `anf.REPLACE`, meaning "Replace this child node with a variable",
- The object `anf.LEAVE`, meaning "Do not replace this child node with a
variable", or
- A Python callable. If a callable, it is called with the parent node, the
field name, and the child node, and must compute a boolean indicating
whether to transform the child node or not. The callable is free to use
whatever context information it chooses. The callable may be invoked more
than once on the same link, and must produce the same answer each time.
The syntax rules are tested in order, and the first match governs. If no rule
matches, the node is not transformed.
The above rules notwithstanding,
- Variable references are never replaced with (fresh) variables, as that would
accomplish nothing.
- The left-hand children of Assign and AugAssign nodes, and the children of
Del nodes, are never replaced with variables, as that would break their
semantics.
- The right-hand children of Assign nodes are never replaced with variables,
as the original assignment would still have to be present in the result
to define the new variable. (That is, there's no point in transforming
`x = sin(y)` into `tmp = sin(y); x = tmp`.)
- The right-hand children of AugAssign nodes are never replaced with variables
either, but only because the difference from Assign was considered a
potential source of confusion (and it would have been slightly awkward in
the code to treat the RHS differently than the LHS).
- Various special-purpose AST nodes are not exposed to the configuration, lest
the transform produce invalid syntax like, e.g., `tmp = +; x = 1 tmp 2`.
For example, the configuration
```python
[(anf.ASTEdgePattern(anf.ANY, anf.ANY, gast.expr), anf.REPLACE)]
```
gives explicit fresh names to all expressions regardless of context (except as
outlined above), whereas
```python
[(anf.ASTEdgePattern(gast.If, "test", anf.ANY), anf.REPLACE)]
```
only transforms the conditionals of `if` statements (but not, e.g., `while`).
If no configuration is supplied, the default behavior is to transform all
expressions except literal constants, which is defined as a configuration as
```python
# For Python 3, and gast library versions before 0.3
literals = (gast.Num, gast.Str, gast.Bytes, gast.NameConstant)
[(anf.ASTEdgePattern(anf.ANY, anf.ANY, literals), anf.LEAVE),
(anf.ASTEdgePattern(anf.ANY, anf.ANY, gast.expr), anf.REPLACE)]
```
Args:
node: The node to transform.
ctx: transformer.EntityInfo. TODO(mdan): What information does this
argument provide?
config: Optional ANF configuration. If omitted, ANF replaces all expressions
except literal constants. |
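The (parent type, field name, child type) edge matching described for `ASTEdgePattern` can be sketched with the standard `ast` module (the real implementation works over `gast` nodes, which may affect `isinstance` checks):

```python
import ast

ANY = object()  # wildcard slot, analogous to anf.ANY

class ASTEdgePattern:
    """Edge pattern: (parent type, field name, child type); each slot
    may be the ANY wildcard. All three must match."""
    def __init__(self, parent, field, child):
        self.parent, self.field, self.child = parent, field, child

    def matches(self, parent, field, child):
        if self.parent is not ANY and not isinstance(parent, self.parent):
            return False
        if self.field is not ANY and field != self.field:
            return False
        return self.child is ANY or isinstance(child, self.child)

if_node = ast.parse('if x > 0:\n    pass').body[0]
pattern = ASTEdgePattern(ast.If, 'test', ANY)
print(pattern.matches(if_node, 'test', if_node.test))  # True
print(pattern.matches(if_node, 'body', if_node.body))  # False
```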
1086 | 1084 | exec_expected_result | tensorflow/tensorflow/python/autograph/pyct/common_transformers/anf_test.py | 40 | function | |
1087 | 1085 | Scope | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 36 | class | Encloses local symbol definition and usage information.
This can track for instance whether a symbol is modified in the current scope.
Note that scopes do not necessarily align with Python's scopes. For example,
the body of an if statement may be considered a separate scope.
Caution - the AST references held by this object are weak.
Scope objects are mutable during construction only, and must be frozen using
`Scope.finalize()` before use. Furthermore, a scope is consistent only after
all its children have been frozen. While analysing code blocks, scopes are
being gradually built, from the innermost scope outward. Freezing indicates
that the analysis of a code block is complete. Once frozen, mutation is no
longer allowed. `is_final` tracks whether the scope is frozen or not. Certain
properties, like `referenced`, are only accurate when called on frozen scopes.
Attributes:
parent: Optional[Scope], the parent scope, if any.
isolated: bool, whether the scope is a true Python scope (e.g. the scope of
a function), or just a surrogate tracking an ordinary code block. Using
the terminology of the Python 3 reference documentation, True roughly
represents an actual scope, whereas False represents an ordinary code
block.
function_name: Optional[str], name of the function owning this scope.
isolated_names: Set[qual_names.QN], identifiers that are isolated to this
scope (even if the scope is not isolated).
annotations: Set[qual_names.QN], identifiers used as type annotations
in this scope.
read: Set[qual_names.QN], identifiers read in this scope.
modified: Set[qual_names.QN], identifiers modified in this scope.
deleted: Set[qual_names.QN], identifiers deleted in this scope.
bound: Set[qual_names.QN], names that are bound to this scope. See
https://docs.python.org/3/reference/executionmodel.html#binding-of-names
for a precise definition.
globals: Set[qual_names.QN], names that are explicitly marked as global in
this scope. Note that this doesn't include free read-only vars bound to
global symbols.
nonlocals: Set[qual_names.QN], names that are explicitly marked as nonlocal
in this scope. Note that this doesn't include free read-only vars bound to
global symbols.
free_vars: Set[qual_names.QN], the free variables in this scope. See
https://docs.python.org/3/reference/executionmodel.html for a precise
definition.
params: WeakValueDictionary[qual_names.QN, ast.Node], function arguments
visible in this scope, mapped to the function node that defines them.
enclosing_scope: Scope, the innermost isolated scope that is a transitive
parent of this scope. May be the scope itself.
referenced: Set[qual_names.QN], the totality of the symbols used by this
scope and its parents.
is_final: bool, whether the scope is frozen or not.
Note - simple statements may never delete and modify a symbol at the same
time. However, compound ones like if statements can. In that latter case, it's
undefined whether the symbol is actually modified or deleted upon statement
exit. Certain analyses like reaching definitions need to be careful about
this. |
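The core of the read/modified tracking described in the `Scope` attributes can be sketched with the standard `ast` module, classifying each Name by its context. This is a toy illustration of the idea, not the scope-aware autograph analyzer:

```python
import ast

class ActivityTracker(ast.NodeVisitor):
    """Toy activity analysis: a Name in Load context is a read; a Name
    in Store context (plain or augmented assignment target) is a
    modification."""
    def __init__(self):
        self.read = set()
        self.modified = set()

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Load):
            self.read.add(node.id)
        elif isinstance(node.ctx, ast.Store):
            self.modified.add(node.id)

tracker = ActivityTracker()
tracker.visit(ast.parse('y = x + 1\nz = y'))
print(sorted(tracker.read))      # ['x', 'y']
print(sorted(tracker.modified))  # ['y', 'z']
```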
1088 | 1086 | enclosing_scope | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 130 | method | |
1089 | 1087 | referenced | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 137 | method | |
1090 | 1088 | free_vars | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 143 | method | |
1091 | 1089 | copy_from | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 147 | method | Recursively copies the contents of this scope from another scope. |
1092 | 1090 | copy_of | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 162 | method | |
1093 | 1091 | merge_from | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 172 | method | Adds all activity from another scope to this scope. |
1094 | 1092 | finalize | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 185 | method | Freezes this scope. |
1095 | 1093 | mark_param | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 207 | method | |
1096 | 1094 | ActivityAnalyzer | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 230 | class | Annotates nodes with local scope information.
See Scope.
The use of this class requires that qual_names.resolve() has been called on
the node. This class will ignore nodes that have not been
annotated with their qualified names. |
1097 | 1095 | visit_Import | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 353 | method | |
1098 | 1096 | visit_ImportFrom | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 356 | method | |
1099 | 1097 | visit_Global | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 359 | method | |
1100 | 1098 | visit_Nonlocal | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 368 | method | |
1101 | 1099 | visit_Expr | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 378 | method | |
1102 | 1100 | visit_Raise | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 381 | method | |
1103 | 1101 | visit_Return | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 384 | method | |
1104 | 1102 | visit_Assign | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 387 | method | |
1105 | 1103 | visit_AnnAssign | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 390 | method | |
1106 | 1104 | visit_AugAssign | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 399 | method | |
1107 | 1105 | visit_Delete | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 413 | method | |
1108 | 1106 | visit_Name | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 416 | method | |
1109 | 1107 | visit_alias | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 422 | method | |
1110 | 1108 | visit_Attribute | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 435 | method | |
1111 | 1109 | visit_Subscript | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 443 | method | |
1112 | 1110 | visit_Print | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 450 | method | |
1113 | 1111 | visit_Assert | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 457 | method | |
1114 | 1112 | visit_Call | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 460 | method | |
1115 | 1113 | visit_comprehension | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 509 | method | |
1116 | 1114 | visit_DictComp | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 516 | method | |
1117 | 1115 | visit_ListComp | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 519 | method | |
1118 | 1116 | visit_SetComp | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 522 | method | |
1119 | 1117 | visit_GeneratorExp | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 525 | method | |
1120 | 1118 | visit_ClassDef | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 528 | method | |
1121 | 1119 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 568 | method | |
1122 | 1120 | visit_Lambda | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 606 | method | |
1123 | 1121 | visit_With | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 648 | method | |
1124 | 1122 | visit_withitem | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 654 | method | |
1125 | 1123 | visit_If | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 657 | method | |
1126 | 1124 | visit_For | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 668 | method | |
1127 | 1125 | visit_While | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 685 | method | |
1128 | 1126 | visit_ExceptHandler | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 696 | method | |
1129 | 1127 | resolve | tensorflow/tensorflow/python/autograph/pyct/static_analysis/activity.py | 707 | function | |
1130 | 1128 | NoValue | tensorflow/tensorflow/python/autograph/pyct/static_analysis/annos.py | 27 | class | |
1131 | 1129 | NodeAnno | tensorflow/tensorflow/python/autograph/pyct/static_analysis/annos.py | 33 | class | Additional annotations used by the static analyzer.
These are in addition to the basic annotations declared in anno.py. |
1132 | 1130 | Analyzer | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 40 | class | CFG visitor that performs liveness analysis at statement level. |
1133 | 1131 | init_state | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 47 | method | |
1134 | 1132 | visit_node | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 50 | method | |
1135 | 1133 | TreeAnnotator | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 96 | class | Runs liveness analysis on each of the functions defined in the AST.
If a function defines other local functions, those will have separate CFGs.
However, dataflow analysis needs to tie up these CFGs to properly emulate the
effect of closures. In the case of liveness, the parent function's live
variables must account for the variables that are live at the entry of each
subfunction. For example:
def foo():
# baz is live from here on
def bar():
print(baz)
This analyzer runs liveness analysis on each individual function, accounting
for the effect above. |
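The liveness analysis described above can be sketched as a backward dataflow pass. This is an illustration of the idea only (on a straight-line program of hypothetical `(defs, uses)` pairs), not TensorFlow's CFG-based implementation in `liveness.py`:

```python
# Minimal sketch of backward liveness analysis on a straight-line
# program, where each statement is a (defs, uses) pair of variable
# names. Illustrative only; the real analyzer works on a full CFG.

def live_variables(statements):
    """Returns the set of variables live at entry of each statement."""
    live = set()                     # live at exit of the last statement
    live_in = [None] * len(statements)
    # Walk backwards: live_in = uses | (live_out - defs)
    for i in range(len(statements) - 1, -1, -1):
        defs, uses = statements[i]
        live = (live - set(defs)) | set(uses)
        live_in[i] = set(live)
    return live_in

# x = 1; y = x; print(y)
program = [({'x'}, set()), ({'y'}, {'x'}), (set(), {'y'})]
print(live_variables(program))  # [set(), {'x'}, {'y'}]
```

A subfunction's free variables would be folded in as extra `uses` at its definition site, which is how the parent's live set comes to account for `baz` in the docstring example.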
1136 | 1134 | visit | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 121 | method | |
1137 | 1135 | visit_Lambda | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 142 | method | |
1138 | 1136 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 145 | method | |
1139 | 1137 | visit_If | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 168 | method | |
1140 | 1138 | visit_For | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 173 | method | |
1141 | 1139 | visit_While | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 178 | method | |
1142 | 1140 | visit_Try | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 183 | method | |
1143 | 1141 | visit_ExceptHandler | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 188 | method | |
1144 | 1142 | visit_With | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 193 | method | |
1145 | 1143 | visit_Expr | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 197 | method | |
1146 | 1144 | resolve | tensorflow/tensorflow/python/autograph/pyct/static_analysis/liveness.py | 206 | function | Resolves the live symbols at the exit of control flow statements.
Args:
node: ast.AST
source_info: transformer.SourceInfo
graphs: Dict[ast.FunctionDef, cfg.Graph]
include_annotations: Bool, whether type annotations should be included in
the analysis.
Returns:
ast.AST |
1147 | 1145 | Definition | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 40 | class | Definition objects describe a unique definition of a variable.
Subclasses of this may be used by passing an appropriate factory function to
resolve.
Attributes:
param_of: Optional[ast.AST]
directives: Dict, optional definition annotations |
1148 | 1146 | Analyzer | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 112 | class | CFG visitor that determines reaching definitions at statement level. |
1149 | 1147 | init_state | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 120 | method | |
1150 | 1148 | visit_node | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 123 | method | |
1151 | 1149 | TreeAnnotator | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 169 | class | AST visitor that annotates each symbol name with its reaching definitions.
Simultaneously, the visitor runs the dataflow analysis on each function node,
accounting for the effect of closures. For example:
def foo():
bar = 1
def baz():
# bar = 1 reaches here |
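The reaching-definitions idea from the docstring above can be sketched as a forward dataflow pass. This is a simplified illustration (statements reduced to sets of defined names), not the AST-annotating implementation in `reaching_definitions.py`:

```python
# Minimal sketch of forward reaching-definitions analysis on a
# straight-line program; each statement is the set of variables it
# defines. Illustrative only.

def reaching_definitions(statements):
    """Maps each statement to the definitions reaching its entry."""
    reaching = {}            # var -> index of the statement defining it
    reaching_in = []
    for i, defined in enumerate(statements):
        reaching_in.append(dict(reaching))
        for var in defined:  # kill the old definition, generate the new one
            reaching[var] = i
    return reaching_in

# bar = 1; bar = 2; baz = bar
print(reaching_definitions([{'bar'}, {'bar'}, {'baz'}]))
# [{}, {'bar': 0}, {'bar': 1}]
```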
1152 | 1150 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 189 | method | |
1153 | 1151 | visit_Name | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 204 | method | |
1154 | 1152 | visit_If | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 233 | method | |
1155 | 1153 | visit_For | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 237 | method | |
1156 | 1154 | visit_While | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 253 | method | |
1157 | 1155 | visit_Try | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 257 | method | |
1158 | 1156 | visit_ExceptHandler | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 261 | method | |
1159 | 1157 | visit | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 267 | method | |
1160 | 1158 | resolve | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_definitions.py | 279 | function | Resolves reaching definitions for each symbol.
Args:
node: ast.AST
source_info: transformer.SourceInfo
graphs: Dict[ast.FunctionDef, cfg.Graph]
definition_factory: Callable[[], Definition]
Returns:
ast.AST |
1161 | 1159 | Definition | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 32 | class | Definition objects describe a unique definition of a function. |
1162 | 1160 | Analyzer | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 76 | class | CFG visitor that determines reaching definitions at statement level. |
1163 | 1161 | init_state | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 85 | method | |
1164 | 1162 | visit_node | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 88 | method | |
1165 | 1163 | TreeAnnotator | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 109 | class | AST visitor that annotates each symbol name with its reaching definitions.
Simultaneously, the visitor runs the dataflow analysis on each function node,
accounting for the effect of closures. For example:
def foo():
def f():
pass
def g():
# `def f` reaches here |
1166 | 1164 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 147 | method | |
1167 | 1165 | visit_Lambda | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 150 | method | |
1168 | 1166 | visit | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 153 | method | |
1169 | 1167 | resolve | tensorflow/tensorflow/python/autograph/pyct/static_analysis/reaching_fndefs.py | 170 | function | Resolves reaching definitions for each symbol.
Args:
node: ast.AST
source_info: transformer.SourceInfo
graphs: Dict[ast.FunctionDef, cfg.Graph]
Returns:
ast.AST |
1170 | 1168 | Resolver | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 41 | class | Resolver objects handle the process of looking up actual names and types.
All resolve_* methods:
* have a first namespace argument, mapping string to actual values
* specify names as QN objects
* specify types as a Set of inferred types
All resolve_* methods must return either:
* a set of `type` objects
* None |
1171 | 1169 | res_name | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 54 | method | Resolves the type of an external (e.g. closure, global) variable. |
1172 | 1170 | res_value | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 58 | method | Resolves the type of a literal value. |
1173 | 1171 | res_call | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 63 | method | Resolves the return type of an external function or method call.
Args:
ns: namespace
name: str, the function name
target: if this is a method call, the types of the method target, None
otherwise
args: list of argument types
keywords: dict of name to argument types
starargs: list of types of the *args arguments (should be at most one)
kwargs: list of types of the **kwargs arguments (in order of appearance) |
1174 | 1172 | res_arg | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 78 | method | Resolves the type of a (possibly annotated) function argument. |
1175 | 1173 | StmtInferrer | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 162 | class | Runs type inference on a single AST statement.
This visitor annotates most nodes with type information. It also sets types
for the symbols modified by this statement in its types_out property. |
1176 | 1174 | visit | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 178 | method | |
1177 | 1175 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 185 | method | |
1178 | 1176 | visit_Constant | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 189 | method | |
1179 | 1177 | visit_Tuple | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 192 | method | |
1180 | 1178 | visit_List | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 203 | method | |
1181 | 1179 | visit_Set | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 212 | method | |
1182 | 1180 | visit_Name | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 215 | method | |
1183 | 1181 | visit_Call | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 244 | method | |
1184 | 1182 | visit_Index | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 273 | method | |
1185 | 1183 | visit_Assign | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 276 | method | |
1186 | 1184 | visit_Subscript | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 284 | method | |
1187 | 1185 | visit_Compare | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 294 | method | |
1188 | 1186 | visit_BinOp | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 316 | method | |
1189 | 1187 | Analyzer | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 329 | class | CFG visitor that propagates type information across statements. |
1190 | 1188 | init_state | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 348 | method | |
1191 | 1189 | visit_node | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 364 | method | |
1192 | 1190 | FunctionVisitor | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 394 | class | AST visitor that applies type inference to each function separately. |
1193 | 1191 | visit_FunctionDef | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 402 | method | |
1194 | 1192 | resolve | tensorflow/tensorflow/python/autograph/pyct/static_analysis/type_inference.py | 417 | function | Performs type inference.
Args:
node: ast.AST
source_info: transformer.SourceInfo
graphs: Dict[ast.FunctionDef, cfg.Graph]
resolver: Resolver
Returns:
ast.AST |
1195 | 1193 | simple_function | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 23 | function | Docstring. |
1196 | 1194 | nested_functions | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 28 | function | Docstring. |
1197 | 1195 | function_with_print | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 37 | function | |
1198 | 1196 | SimpleClass | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 44 | class | |
1199 | 1197 | simple_method | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 46 | method | |
1200 | 1198 | method_with_print | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 49 | method | |
1201 | 1199 | function_with_multiline_call | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 53 | function | Docstring. |
1202 | 1200 | basic_decorator | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 61 | function | |
1203 | 1201 | decorated_function | tensorflow/tensorflow/python/autograph/pyct/testing/basic_definitions.py | 67 | function | |
1204 | 1202 | NodeSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 30 | class | |
1205 | 1203 | sample | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 33 | method | |
1206 | 1204 | StatementSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 39 | class | |
1207 | 1205 | ExpressionSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 49 | class | |
1208 | 1206 | CompareSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 58 | class | |
1209 | 1207 | BinaryOpSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 71 | class | |
1210 | 1208 | UnaryOpSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 83 | class | |
1211 | 1209 | NameSampler | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 87 | class | |
1212 | 1210 | CodeGenerator | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 98 | class | Generate random syntactically-valid Python ASTs. |
1213 | 1211 | generate_statement | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 105 | method | Generate a statement node, dispatching to the correct class method. |
1214 | 1212 | sample_node_list | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 124 | method | Generate a list of statements of random length.
Args:
low: Fewest number of statements to generate.
high: Highest number of statements to generate.
generator: Function to call to generate nodes.
Returns:
A list of statements. |
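The random-AST generation that `CodeGenerator` performs can be sketched with the standard `ast` module. The node choices and weights below are simplified assumptions, not the sampler classes' actual behavior:

```python
import ast
import random

# Sketch of generating a random, syntactically valid Python AST and
# executing it, in the spirit of codegen.py's CodeGenerator.

def generate_binop(rng):
    """Builds a random BinOp over small integer constants."""
    op = rng.choice([ast.Add(), ast.Sub(), ast.Mult()])
    return ast.BinOp(left=ast.Constant(rng.randint(1, 9)),
                     op=op,
                     right=ast.Constant(rng.randint(1, 9)))

rng = random.Random(0)
tree = ast.Expression(body=generate_binop(rng))
ast.fix_missing_locations(tree)   # fill in required lineno/col_offset
value = eval(compile(tree, '<generated>', 'eval'))
print(ast.unparse(tree.body), '=', value)
```

`fix_missing_locations` is the key step: manually built nodes lack source positions, and `compile` rejects trees without them.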
1215 | 1213 | generate_Name | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 140 | method | |
1216 | 1214 | generate_BinOp | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 145 | method | |
1217 | 1215 | generate_Compare | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 151 | method | |
1218 | 1216 | generate_UnaryOp | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 155 | method | |
1219 | 1217 | generate_expression | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 160 | method | |
1220 | 1218 | generate_Assign | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 167 | method | Generate an Assign node. |
1221 | 1219 | generate_If | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 177 | method | Generate an If node. |
1222 | 1220 | generate_While | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 196 | method | Generate a While node. |
1223 | 1221 | generate_Call | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 207 | method | |
1224 | 1222 | generate_Return | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 210 | method | |
1225 | 1223 | generate_Print | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 213 | method | |
1226 | 1224 | generate_FunctionDef | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 216 | method | Generate a FunctionDef node. |
1227 | 1225 | generate_random_functiondef | tensorflow/tensorflow/python/autograph/pyct/testing/codegen.py | 233 | function | |
1228 | 1226 | wrapping_decorator | tensorflow/tensorflow/python/autograph/pyct/testing/decorators.py | 24 | function | |
1229 | 1227 | standalone_decorator | tensorflow/tensorflow/python/autograph/pyct/testing/decorators.py | 33 | function | |
1230 | 1228 | functional_decorator | tensorflow/tensorflow/python/autograph/pyct/testing/decorators.py | 41 | function | |
1231 | 1229 | set_verbosity | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 41 | function | Sets the AutoGraph verbosity level.
_Debug logging in AutoGraph_
More verbose logging is useful to enable when filing bug reports or doing
more in-depth debugging.
There are two means to control the logging verbosity:
* The `set_verbosity` function
* The `AUTOGRAPH_VERBOSITY` environment variable
`set_verbosity` takes precedence over the environment variable.
For example:
```python
import os
import tensorflow as tf
os.environ['AUTOGRAPH_VERBOSITY'] = '5'
# Verbosity is now 5
tf.autograph.set_verbosity(0)
# Verbosity is now 0
os.environ['AUTOGRAPH_VERBOSITY'] = '1'
# No effect, because set_verbosity was already called.
```
Logs entries are output to [absl](https://abseil.io)'s
[default output](https://abseil.io/docs/python/guides/logging),
with `INFO` level.
Logs can be mirrored to stdout by using the `alsologtostdout` argument.
Mirroring is enabled by default when Python runs in interactive mode.
Args:
level: int, the verbosity level; larger values specify increased verbosity;
0 means no logging. When reporting bugs, it is recommended to set this
value to a larger number, like 10.
alsologtostdout: bool, whether to also output log messages to `sys.stdout`. |
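The precedence rule described above (an explicit call wins over the environment variable) can be sketched as follows. The names `set_level` and `effective_level` are hypothetical stand-ins, not AutoGraph's internal state:

```python
import os

# Sketch of the documented precedence: an explicit set_verbosity-style
# call overrides the AUTOGRAPH_VERBOSITY environment variable.

_explicit_level = None

def set_level(level):
    global _explicit_level
    _explicit_level = level

def effective_level(default=0):
    if _explicit_level is not None:
        return _explicit_level
    # os.environ values are always strings, hence the quotes and int()
    return int(os.environ.get('AUTOGRAPH_VERBOSITY', default))

os.environ['AUTOGRAPH_VERBOSITY'] = '5'
print(effective_level())  # 5
set_level(0)
print(effective_level())  # 0: the explicit call takes precedence
```

Note that environment variables must be assigned as strings; assigning an `int` to `os.environ` raises a `TypeError`.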
1232 | 1230 | trace | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 92 | function | Traces argument information at compilation time.
`trace` is useful when debugging, and it always executes during the tracing
phase, that is, when the TF graph is constructed.
_Example usage_
```python
import tensorflow as tf
for i in tf.range(10):
tf.autograph.trace(i)
# Output: <Tensor ...>
```
Args:
*args: Arguments to print to `sys.stdout`. |
1233 | 1231 | get_verbosity | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 114 | function | |
1234 | 1232 | has_verbosity | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 121 | function | |
1235 | 1233 | error | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 131 | function | |
1236 | 1234 | log | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 138 | function | |
1237 | 1235 | warn | tensorflow/tensorflow/python/autograph/utils/ag_logging.py | 145 | function | |
1238 | 1236 | BasicRef | tensorflow/tensorflow/python/autograph/utils/compat_util.py | 27 | class | This shim emulates the nonlocal keyword in Py2-compatible source. |
1239 | 1237 | deprecated_py2_support | tensorflow/tensorflow/python/autograph/utils/compat_util.py | 34 | function | Swaps calling module with a Py2-specific implementation. Noop in Py3. |
1240 | 1238 | control_dependency_on_returns | tensorflow/tensorflow/python/autograph/utils/context_managers.py | 27 | function | Create a TF control dependency on the return values of a function.
If the function had no return value, a no-op context is returned.
Args:
return_value: The return value to set as control dependency.
Returns:
A context manager. |
1241 | 1239 | alias_tensors | tensorflow/tensorflow/python/autograph/utils/misc.py | 27 | function | Wraps any Tensor arguments with an identity op.
Any other argument, including Variables, is returned unchanged.
Args:
*args: Any arguments. Must contain at least one element.
Returns:
Same as *args, with Tensor instances replaced as described.
Raises:
ValueError: If args doesn't meet the requirements. |
1242 | 1240 | get_range_len | tensorflow/tensorflow/python/autograph/utils/misc.py | 55 | function | |
1243 | 1241 | MatchDType | tensorflow/tensorflow/python/autograph/utils/py_func.py | 28 | class | Allows matching the dtype of an argument.
Used in conjunction with function calls. For example, MatchDType(0) will
match the DType of the first argument. |
1244 | 1242 | wrap_py_func | tensorflow/tensorflow/python/autograph/utils/py_func.py | 38 | function | Helper that wraps a callable to py_func.
The helper passes tensor arguments through the py_func interface. Non-tensor
arguments are allowed, and will be passed to f directly. Note that non-tensor
arguments captured by f will not update every time the wrapper is
called (this is consistent with its argument list, which only includes
the tensor arguments). In general, it's safest not to reuse this wrapper.
Args:
f: Callable
return_dtypes: None, individual or tuple/list of DType or MatchDType, the
data type for each of f's return value(s). Set to None if f has no
return values or use_dummy_return is True. Use MatchDType to define a
dtype identical to that of `i`th argument (argument 0 is the first);
an argument must be of Tensor type if it is to be used with MatchDType.
args: Positional arguments for f, as list or tuple.
kwargs: Keyword arguments for f, as dict with string keys. May be None.
use_dummy_return: If True, the function will return a dummy value of 1
and discard its actual return value.
Returns:
The return values of f converted to tensor.
Raises:
ValueError: if any of the arguments are incorrect. |
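The `MatchDType` mechanism described above can be sketched in pure Python. The `resolve_dtypes` helper below is hypothetical, standing in for the dtype-resolution step; the real `wrap_py_func` additionally routes tensor arguments through TensorFlow's py_func op:

```python
# Pure-Python sketch of the MatchDType idea: a placeholder that, at
# call time, is replaced by the dtype of the i-th argument. Here
# Python types stand in for TF DTypes.

class MatchDType(object):
    """Marks a return dtype as 'same as argument number arg_number'."""

    def __init__(self, arg_number):
        self.arg_number = arg_number

def resolve_dtypes(return_dtypes, args):
    """Replaces MatchDType placeholders with actual argument dtypes."""
    resolved = []
    for dt in return_dtypes:
        if isinstance(dt, MatchDType):
            resolved.append(type(args[dt.arg_number]))
        else:
            resolved.append(dt)
    return resolved

print(resolve_dtypes([float, MatchDType(1)], (1.0, 2)))
# [<class 'float'>, <class 'int'>]
```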
1245 | 1243 | dynamic_list_append | tensorflow/tensorflow/python/autograph/utils/tensor_list.py | 26 | function | Converts a list append call inline. |
1246 | 1244 | TensorList | tensorflow/tensorflow/python/autograph/utils/tensor_list.py | 43 | class | Tensor list wrapper API-compatible with Python built-in list. |
1247 | 1245 | append | tensorflow/tensorflow/python/autograph/utils/tensor_list.py | 51 | method | |
1248 | 1246 | pop | tensorflow/tensorflow/python/autograph/utils/tensor_list.py | 54 | method | |
1249 | 1247 | clear | tensorflow/tensorflow/python/autograph/utils/tensor_list.py | 58 | method | |
1250 | 1248 | count | tensorflow/tensorflow/python/autograph/utils/tensor_list.py | 61 | method | |
1251 | 1249 | is_dense_tensor | tensorflow/tensorflow/python/autograph/utils/tensors.py | 32 | function | |
1252 | 1250 | is_tensor_array | tensorflow/tensorflow/python/autograph/utils/tensors.py | 38 | function | |
1253 | 1251 | is_tensor_list | tensorflow/tensorflow/python/autograph/utils/tensors.py | 42 | function | |
1254 | 1252 | is_range_tensor | tensorflow/tensorflow/python/autograph/utils/tensors.py | 51 | function | Returns True if a tensor is the result of a tf.range op. Best effort. |
1255 | 1253 | list_local_devices | tensorflow/tensorflow/python/client/device_lib.py | 25 | function | List the devices available in the local process.
Args:
session_config: a session config proto or None to use the default config.
Returns:
A list of `DeviceAttribute` protocol buffers. |
1256 | 1254 | TF_NewSessionOptions | tensorflow/tensorflow/python/client/pywrap_tf_session.py | 51 | function | |
1257 | 1255 | TF_Reset | tensorflow/tensorflow/python/client/pywrap_tf_session.py | 65 | function | |
1258 | 1256 | SessionInterface | tensorflow/tensorflow/python/client/session.py | 51 | class | Base class for implementations of TensorFlow client sessions. |
1259 | 1257 | graph | tensorflow/tensorflow/python/client/session.py | 55 | method | The underlying TensorFlow graph, to be used in building Operations. |
1260 | 1258 | sess_str | tensorflow/tensorflow/python/client/session.py | 60 | method | The TensorFlow process to which this session will connect. |
1261 | 1259 | run | tensorflow/tensorflow/python/client/session.py | 64 | method | Runs operations in the session. See `BaseSession.run()` for details. |
1262 | 1260 | partial_run_setup | tensorflow/tensorflow/python/client/session.py | 68 | method | Sets up the feeds and fetches for partial runs in the session. |
1263 | 1261 | partial_run | tensorflow/tensorflow/python/client/session.py | 72 | method | Continues the execution with additional feeds and fetches. |
1264 | 1262 | register_session_run_conversion_functions | tensorflow/tensorflow/python/client/session.py | 144 | function | Register fetch and feed conversion functions for `tf.Session.run()`.
This function registers a triple of conversion functions for fetching and/or
feeding values of user-defined types in a call to tf.Session.run().
An example:
```python
class SquaredTensor(object):
def __init__(self, tensor):
self.sq = tf.square(tensor)
# you can define conversion functions as follows:
fetch_function = lambda squared_tensor:([squared_tensor.sq],
lambda val: val[0])
feed_function = lambda feed, feed_val: [(feed.sq, feed_val)]
feed_function_for_partial_run = lambda feed: [feed.sq]
# then after invoking this register function, you can use as follows:
session.run(squared_tensor1,
feed_dict = {squared_tensor2 : some_numpy_array})
```
Args:
tensor_type: The type for which you want to register a conversion function.
fetch_function: A callable that takes an object of type `tensor_type` and
returns a tuple, where the first element is a list of `tf.Tensor` objects,
and the second element is a callable that takes a list of ndarrays and
returns an object of some value type that corresponds to `tensor_type`.
fetch_function describes how to expand fetch into its component Tensors
and how to contract the fetched results back into a single return value.
feed_function: A callable that takes feed_key and feed_value as input, and
returns a list of tuples (feed_tensor, feed_val), feed_key must have type
`tensor_type`, and feed_tensor must have type `tf.Tensor`. Each feed
function describes how to unpack a single fed value and map it to feeds of
one or more tensors and their corresponding values.
feed_function_for_partial_run: A callable for specifying tensor values to
feed when setting up a partial run, which takes a `tensor_type` type
object as input, and returns a list of Tensors.
Raises:
ValueError: If `tensor_type` has already been registered. |
1265 | 1263 | BaseSession | tensorflow/tensorflow/python/client/session.py | 627 | class | A class for interacting with a TensorFlow computation.
The BaseSession enables incremental graph building with inline
execution of Operations and evaluation of Tensors. |
1266 | 1264 | list_devices | tensorflow/tensorflow/python/client/session.py | 706 | method | Lists available devices in this session.
```python
devices = sess.list_devices()
for d in devices:
print(d.name)
```
Where:
Each element in the list has the following properties
name: A string with the full name of the device. ex:
`/job:worker/replica:0/task:3/device:CPU:0`
device_type: The type of the device (e.g. `CPU`, `GPU`, `TPU`).
memory_limit: The maximum amount of memory available on the device.
Note: depending on the device, it is possible the usable memory could
be substantially less.
Raises:
tf.errors.OpError: If it encounters an error (e.g. session is in an
invalid state, or network errors occur).
Returns:
A list of devices in the session. |
1267 | 1265 | close | tensorflow/tensorflow/python/client/session.py | 744 | method | Closes this session.
Calling this method frees all resources associated with the session.
Raises:
tf.errors.OpError: Or one of its subclasses if an error occurs while
closing the TensorFlow session. |
1268 | 1266 | graph | tensorflow/tensorflow/python/client/session.py | 775 | method | The graph that was launched in this session. |
1269 | 1267 | graph_def | tensorflow/tensorflow/python/client/session.py | 780 | method | A serializable version of the underlying TensorFlow graph.
Returns:
A graph_pb2.GraphDef proto containing nodes for all of the Operations in
the underlying TensorFlow graph. |
1270 | 1268 | sess_str | tensorflow/tensorflow/python/client/session.py | 790 | method | |
1271 | 1269 | as_default | tensorflow/tensorflow/python/client/session.py | 793 | method | Returns a context manager that makes this object the default session.
Use with the `with` keyword to specify that calls to
`tf.Operation.run` or `tf.Tensor.eval` should be executed in
this session.
```python
c = tf.constant(...)
sess = tf.compat.v1.Session()
with sess.as_default():
assert tf.compat.v1.get_default_session() is sess
print(c.eval())
```
To get the current default session, use `tf.compat.v1.get_default_session`.
*N.B.* The `as_default` context manager *does not* close the
session when you exit the context, and you must close the session
explicitly.
```python
c = tf.constant(...)
sess = tf.compat.v1.Session()
with sess.as_default():
print(c.eval())
# ...
with sess.as_default():
print(c.eval())
sess.close()
```
Alternatively, you can use `with tf.compat.v1.Session():` to create a
session that is automatically closed on exiting the context,
including when an uncaught exception is raised.
*N.B.* The default session is a property of the current thread. If you
create a new thread, and wish to use the default session in that
thread, you must explicitly add a `with sess.as_default():` in that
thread's function.
*N.B.* Entering a `with sess.as_default():` block does not affect
the current default graph. If you are using multiple graphs, and
`sess.graph` is different from the value of
`tf.compat.v1.get_default_graph`, you must explicitly enter a
`with sess.graph.as_default():` block to make `sess.graph` the default
graph.
Returns:
A context manager using this session as the default session. |
1272 | 1270 | run | tensorflow/tensorflow/python/client/session.py | 848 | method | Runs operations and evaluates tensors in `fetches`.
This method runs one "step" of TensorFlow computation, by
running the necessary graph fragment to execute every `Operation`
and evaluate every `Tensor` in `fetches`, substituting the values in
`feed_dict` for the corresponding input values.
The `fetches` argument may be a single graph element, or an arbitrarily
nested list, tuple, namedtuple, dict, or OrderedDict containing graph
elements at its leaves. A graph element can be one of the following types:
* A `tf.Operation`.
The corresponding fetched value will be `None`.
* A `tf.Tensor`.
The corresponding fetched value will be a numpy ndarray containing the
value of that tensor.
* A `tf.sparse.SparseTensor`.
The corresponding fetched value will be a
`tf.compat.v1.SparseTensorValue`
containing the value of that sparse tensor.
* A `get_tensor_handle` op. The corresponding fetched value will be a
numpy ndarray containing the handle of that tensor.
* A `string` which is the name of a tensor or operation in the graph.
The value returned by `run()` has the same shape as the `fetches` argument,
where the leaves are replaced by the corresponding values returned by
TensorFlow.
Example:
```python
a = tf.constant([10, 20])
b = tf.constant([1.0, 2.0])
# 'fetches' can be a singleton
v = session.run(a)
# v is the numpy array [10, 20]
# 'fetches' can be a list.
v = session.run([a, b])
# v is a Python list with 2 numpy arrays: the 1-D array [10, 20] and the
# 1-D array [1.0, 2.0]
# 'fetches' can be arbitrary lists, tuples, namedtuple, dicts:
MyData = collections.namedtuple('MyData', ['a', 'b'])
v = session.run({'k1': MyData(a, b), 'k2': [b, a]})
# v is a dict with
# v['k1'] is a MyData namedtuple with 'a' (the numpy array [10, 20]) and
# 'b' (the numpy array [1.0, 2.0])
# v['k2'] is a list with the numpy array [1.0, 2.0] and the numpy array
# [10, 20].
```
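The structure-preserving mapping described above can be sketched in plain Python (an illustrative re-implementation, not TensorFlow's actual code; `run_structure` and `evaluate` are made-up names):
```python
# Illustrative sketch (not TensorFlow's implementation) of how run()
# preserves the nesting of `fetches`: recurse into dicts/lists/tuples
# and evaluate only the leaves.
def run_structure(fetches, evaluate):
    if isinstance(fetches, dict):
        return {k: run_structure(v, evaluate) for k, v in fetches.items()}
    if isinstance(fetches, (list, tuple)):
        return [run_structure(v, evaluate) for v in fetches]
    return evaluate(fetches)

# Stand-in for session evaluation: look leaf names up in a table.
values = {'a': 10, 'b': 2.0}
result = run_structure({'k1': ['a', 'b'], 'k2': 'a'}, values.__getitem__)
print(result)  # {'k1': [10, 2.0], 'k2': 10}
```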
The optional `feed_dict` argument allows the caller to override
the value of tensors in the graph. Each key in `feed_dict` can be
one of the following types:
* If the key is a `tf.Tensor`, the
value may be a Python scalar, string, list, or numpy ndarray
that can be converted to the same `dtype` as that
tensor. Additionally, if the key is a
`tf.compat.v1.placeholder`, the shape of
the value will be checked for compatibility with the placeholder.
* If the key is a
`tf.sparse.SparseTensor`,
the value should be a
`tf.compat.v1.SparseTensorValue`.
* If the key is a nested tuple of `Tensor`s or `SparseTensor`s, the value
should be a nested tuple with the same structure that maps to their
corresponding values as above.
Each value in `feed_dict` must be convertible to a numpy array of the dtype
of the corresponding key.
The optional `options` argument expects a [`RunOptions`] proto. The options
allow controlling the behavior of this particular step (e.g. turning tracing
on).
The optional `run_metadata` argument expects a [`RunMetadata`] proto. When
appropriate, the non-Tensor output of this step will be collected there. For
example, when users turn on tracing in `options`, the profiled info will be
collected into this argument and passed back.
Args:
fetches: A single graph element, a list of graph elements, or a dictionary
whose values are graph elements or lists of graph elements (described
above).
feed_dict: A dictionary that maps graph elements to values (described
above).
options: A [`RunOptions`] protocol buffer
run_metadata: A [`RunMetadata`] protocol buffer
Returns:
Either a single value if `fetches` is a single graph element, or
a list of values if `fetches` is a list, or a dictionary with the
same keys as `fetches` if that is a dictionary (described above).
Order in which `fetches` operations are evaluated inside the call
is undefined.
Raises:
RuntimeError: If this `Session` is in an invalid state (e.g. has been
closed).
TypeError: If `fetches` or `feed_dict` keys are of an inappropriate type.
ValueError: If `fetches` or `feed_dict` keys are invalid or refer to a
`Tensor` that doesn't exist. |
1273 | 1271 | partial_run | tensorflow/tensorflow/python/client/session.py | 969 | method | Continues the execution with more feeds and fetches.
This is EXPERIMENTAL and subject to change.
To use partial execution, a user first calls `partial_run_setup()` and
then a sequence of `partial_run()`. `partial_run_setup` specifies the
list of feeds and fetches that will be used in the subsequent
`partial_run` calls.
The optional `feed_dict` argument allows the caller to override
the value of tensors in the graph. See run() for more information.
Below is a simple example:
```python
a = array_ops.placeholder(dtypes.float32, shape=[])
b = array_ops.placeholder(dtypes.float32, shape=[])
c = array_ops.placeholder(dtypes.float32, shape=[])
r1 = math_ops.add(a, b)
r2 = math_ops.multiply(r1, c)
h = sess.partial_run_setup([r1, r2], [a, b, c])
res = sess.partial_run(h, r1, feed_dict={a: 1, b: 2})
res = sess.partial_run(h, r2, feed_dict={c: res})
```
Args:
handle: A handle for a sequence of partial runs.
fetches: A single graph element, a list of graph elements, or a dictionary
whose values are graph elements or lists of graph elements (see
documentation for `run`).
feed_dict: A dictionary that maps graph elements to values (described
above).
Returns:
Either a single value if `fetches` is a single graph element, or
a list of values if `fetches` is a list, or a dictionary with the
same keys as `fetches` if that is a dictionary
(see documentation for `run`).
Raises:
tf.errors.OpError: Or one of its subclasses on error. |
1274 | 1272 | partial_run_setup | tensorflow/tensorflow/python/client/session.py | 1016 | method | Sets up a graph with feeds and fetches for partial run.
This is EXPERIMENTAL and subject to change.
Note that, in contrast to `run`, `feeds` only specifies the graph elements.
The tensors will be supplied by the subsequent `partial_run` calls.
Args:
fetches: A single graph element, or a list of graph elements.
feeds: A single graph element, or a list of graph elements.
Returns:
A handle for partial run.
Raises:
RuntimeError: If this `Session` is in an invalid state (e.g. has been
closed).
TypeError: If `fetches` or `feed_dict` keys are of an inappropriate type.
tf.errors.OpError: Or one of its subclasses if a TensorFlow error happens. |
1275 | 1273 | make_callable | tensorflow/tensorflow/python/client/session.py | 1186 | method | Returns a Python callable that runs a particular step.
The returned callable will take `len(feed_list)` arguments whose types
must be compatible feed values for the respective elements of `feed_list`.
For example, if element `i` of `feed_list` is a `tf.Tensor`, the `i`th
argument to the returned callable must be a numpy ndarray (or something
convertible to an ndarray) with matching element type and shape. See
`tf.Session.run` for details of the allowable feed key and value types.
The returned callable will have the same return type as
`tf.Session.run(fetches, ...)`. For example, if `fetches` is a `tf.Tensor`,
the callable will return a numpy ndarray; if `fetches` is a `tf.Operation`,
it will return `None`.
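As a rough sketch, the returned callable behaves like a closure that zips `feed_list` with its positional arguments (a hypothetical pure-Python illustration; the real callable is backed by the C++ runtime):
```python
# Hypothetical sketch of the callable make_callable() returns: a closure
# that zips feed_list with the positional arguments and delegates to a
# run() function. (Illustration only, not TensorFlow's code.)
def make_callable(run, fetches, feed_list=()):
    def step(*args):
        if len(args) != len(feed_list):
            raise TypeError('expected %d feed values, got %d'
                            % (len(feed_list), len(args)))
        return run(fetches, feed_dict=dict(zip(feed_list, args)))
    return step

# Toy run() that just sums the feed values, standing in for a session.
fake_run = lambda fetches, feed_dict: sum(feed_dict.values())
step = make_callable(fake_run, 'loss', feed_list=('x', 'y'))
print(step(2, 3))  # 5
```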
Args:
fetches: A value or list of values to fetch. See `tf.Session.run` for
details of the allowable fetch types.
feed_list: (Optional.) A list of `feed_dict` keys. See `tf.Session.run`
for details of the allowable feed key types.
accept_options: (Optional.) If `True`, the returned `Callable` will be
able to accept `tf.compat.v1.RunOptions` and `tf.compat.v1.RunMetadata`
as optional keyword arguments `options` and `run_metadata`,
respectively, with the same syntax and semantics as `tf.Session.run`,
which is useful for certain use cases (profiling and debugging) but will
result in measurable slowdown of the `Callable`'s
performance. Default: `False`.
Returns:
A function that when called will execute the step defined by
`feed_list` and `fetches` in this session.
Raises:
TypeError: If `fetches` or `feed_list` cannot be interpreted
as arguments to `tf.Session.run`. |
1276 | 1274 | Session | tensorflow/tensorflow/python/client/session.py | 1509 | class | A class for running TensorFlow operations.
A `Session` object encapsulates the environment in which `Operation`
objects are executed, and `Tensor` objects are evaluated. For
example:
```python
tf.compat.v1.disable_eager_execution() # need to disable eager in TF2.x
# Build a graph.
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
# Launch the graph in a session.
sess = tf.compat.v1.Session()
# Evaluate the tensor `c`.
print(sess.run(c)) # prints 30.0
```
A session may own resources, such as
`tf.Variable`, `tf.queue.QueueBase`,
and `tf.compat.v1.ReaderBase`. It is important to release
these resources when they are no longer required. To do this, either
invoke the `tf.Session.close` method on the session, or use
the session as a context manager. The following two examples are
equivalent:
```python
# Using the `close()` method.
sess = tf.compat.v1.Session()
sess.run(...)
sess.close()
# Using the context manager.
with tf.compat.v1.Session() as sess:
sess.run(...)
```
The
[`ConfigProto`](https://www.tensorflow.org/code/tensorflow/core/protobuf/config.proto)
protocol buffer exposes various configuration options for a
session. For example, to create a session that uses soft constraints
for device placement, and log the resulting placement decisions,
create a session as follows:
```python
# Launch the graph in a session that allows soft device placement and
# logs the placement decisions.
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(
allow_soft_placement=True,
log_device_placement=True))
``` |
1277 | 1275 | reset | tensorflow/tensorflow/python/client/session.py | 1644 | method | Resets resource containers on `target`, and close all connected sessions.
A resource container is distributed across all workers in the
same cluster as `target`. When a resource container on `target`
is reset, resources associated with that container will be cleared.
In particular, all Variables in the container will become undefined:
they lose their values and shapes.
NOTE:
(i) reset() is currently only implemented for distributed sessions.
(ii) Any sessions on the master named by `target` will be closed.
If no resource containers are provided, all containers are reset.
Args:
target: The execution engine to connect to.
containers: A list of resource container name strings, or `None` if all
the containers are to be reset.
config: (Optional.) Protocol buffer with configuration options.
Raises:
tf.errors.OpError: Or one of its subclasses if an error occurs while
resetting containers. |
1278 | 1276 | InteractiveSession | tensorflow/tensorflow/python/client/session.py | 1679 | class | A TensorFlow `Session` for use in interactive contexts, such as a shell.
The only difference with a regular `Session` is that an `InteractiveSession`
installs itself as the default session on construction.
The methods `tf.Tensor.eval`
and `tf.Operation.run`
will use that session to run ops.
This is convenient in interactive shells and [IPython
notebooks](http://ipython.org), as it avoids having to pass an explicit
`Session` object to run ops.
For example:
```python
sess = tf.compat.v1.InteractiveSession()
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
# We can just use 'c.eval()' without passing 'sess'
print(c.eval())
sess.close()
```
Note that a regular session installs itself as the default session when it
is created in a `with` statement. The common usage in non-interactive
programs is to follow that pattern:
```python
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
with tf.compat.v1.Session():
# We can also use 'c.eval()' here.
print(c.eval())
``` |
1279 | 1277 | close | tensorflow/tensorflow/python/client/session.py | 1771 | method | Closes an `InteractiveSession`. |
1280 | 1278 | SessionBenchmark | tensorflow/tensorflow/python/client/session_benchmark.py | 36 | class | Tests and benchmarks for interacting with the `tf.compat.v1.Session`. |
1281 | 1279 | benchmarkGrpcSession | tensorflow/tensorflow/python/client/session_benchmark.py | 178 | method | |
1282 | 1280 | benchmarkDirectSession | tensorflow/tensorflow/python/client/session_benchmark.py | 204 | method | |
1283 | 1281 | AllocationMaximum | tensorflow/tensorflow/python/client/timeline.py | 32 | class | Stores the maximum allocation for a given allocator within the timeline.
Parameters:
timestamp: `tensorflow::Env::NowMicros()` when this maximum was reached.
num_bytes: the total memory used at this time.
tensors: the set of tensors allocated at this time. |
1284 | 1282 | StepStatsAnalysis | tensorflow/tensorflow/python/client/timeline.py | 44 | class | Stores the step stats analysis output.
Parameters:
chrome_trace: A dict containing the chrome trace analysis.
allocator_maximums: A dict mapping allocator names to AllocationMaximum. |
1285 | 1283 | Timeline | tensorflow/tensorflow/python/client/timeline.py | 346 | class | A class for visualizing execution timelines of TensorFlow steps. |
1286 | 1284 | analyze_step_stats | tensorflow/tensorflow/python/client/timeline.py | 674 | method | Analyze the step stats and format it into Chrome Trace Format.
Args:
show_dataflow: (Optional.) If True, add flow events to the trace
connecting producers and consumers of tensors.
show_memory: (Optional.) If True, add object snapshot events to the trace
showing the sizes and lifetimes of tensors.
op_time: (Optional.) How the execution time of an op is shown in the
timeline. Possible values are "schedule", "gpu" and "all". "schedule"
shows the op from the time it is scheduled to the end of the scheduling;
note that by the end of its scheduling, its async kernels may not have
started yet, so the default timing from step_stats is used. "gpu" shows
the op with the execution time of its kernels on GPU. "all" shows the op
from the start of its scheduling to the end of its last kernel.
Returns:
A 'StepStatsAnalysis' object. |
1287 | 1285 | generate_chrome_trace_format | tensorflow/tensorflow/python/client/timeline.py | 707 | method | Produces a trace in Chrome Trace Format.
Args:
show_dataflow: (Optional.) If True, add flow events to the trace
connecting producers and consumers of tensors.
show_memory: (Optional.) If True, add object snapshot events to the trace
showing the sizes and lifetimes of tensors.
op_time: (Optional.) How the execution time of an op is shown in the
timeline. Possible values are "schedule", "gpu" and "all".
"schedule" shows the op from the time it is scheduled to the end of
the scheduling; note that by the end of its scheduling, its async
kernels may not have started yet, so the default timing from
step_stats is used.
"gpu" shows the op with the execution time of its kernels on GPU.
"all" shows the op from the start of its scheduling to the end of
its last kernel.
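For reference, a minimal document in Chrome Trace Format looks like the following (the event fields follow the Chrome tracing spec, where `"ph": "X"` is a complete event with a start timestamp and duration; the op name and timings are invented for illustration):
```python
import json

# A minimal Chrome-Trace-Format document of the kind this method emits.
trace = {
    'traceEvents': [
        {'name': 'MatMul', 'cat': 'Op', 'ph': 'X',   # complete event
         'ts': 0, 'dur': 120, 'pid': 0, 'tid': 0},
    ]
}
chrome_trace = json.dumps(trace)
print(chrome_trace[:30])
```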
Returns:
A JSON formatted string in Chrome Trace format. |
1288 | 1286 | forward_compatible | tensorflow/tensorflow/python/compat/compat.py | 70 | function | Return true if the forward compatibility window has expired.
See [Version
compatibility](https://tensorflow.org/guide/version_compat#backward_forward).
Forward-compatibility refers to scenarios where the producer of a TensorFlow
model (a GraphDef or SavedModel) is compiled against a version of the
TensorFlow library newer than what the consumer was compiled against. The
"producer" is typically a Python program that constructs and trains a model
while the "consumer" is typically another program that loads and serves the
model.
TensorFlow supports a 3-week forward-compatibility window for programs
compiled from source at HEAD.
For example, consider the case where a new operation `MyNewAwesomeAdd` is
created with the intent of replacing the implementation of an existing Python
wrapper - `tf.add`. The Python wrapper implementation should change from
something like:
```python
def add(inputs, name=None):
return gen_math_ops.add(inputs, name)
```
to:
```python
from tensorflow.python.compat import compat
def add(inputs, name=None):
if compat.forward_compatible(year, month, day):
# Can use the awesome new implementation.
return gen_math_ops.my_new_awesome_add(inputs, name)
# To maintain forward compatibility, use the old implementation.
return gen_math_ops.add(inputs, name)
```
Where `year`, `month`, and `day` specify the date beyond which binaries
that consume a model are expected to have been updated to include the
new operations. This date is typically at least 3 weeks beyond the date
the code that adds the new operation is committed.
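Conceptually, the check reduces to a date comparison against a compatibility horizon shipped with the library, sketched below (an assumed simplification; the horizon constant and its exact 3-week offset are illustrative, not TensorFlow's actual internals):
```python
from datetime import date, timedelta

# Assumed sketch of the check forward_compatible() performs: the library
# carries a horizon roughly 3 weeks past its build date, and the call is
# just a date comparison against that horizon.
_FORWARD_COMPATIBILITY_HORIZON = date.today() + timedelta(weeks=3)

def forward_compatible(year, month, day):
    # True once the horizon has moved past the requested date.
    return _FORWARD_COMPATIBILITY_HORIZON > date(year, month, day)

print(forward_compatible(2018, 8, 1))  # True: that date is long past
```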
Args:
year: A year (e.g., 2018). Must be an `int`.
month: A month (1 <= month <= 12) in year. Must be an `int`.
day: A day (1 <= day <= 31, depending on the month). Must be an
`int`.
Returns:
True if the caller can expect that serialized TensorFlow graphs produced
can be consumed by programs that are compiled with the TensorFlow library
source code after (year, month, day). |
1289 | 1287 | forward_compatibility_horizon | tensorflow/tensorflow/python/compat/compat.py | 131 | function | Context manager for testing forward compatibility of generated graphs.
See [Version
compatibility](https://tensorflow.org/guide/version_compat#backward_forward).
To ensure forward compatibility of generated graphs (see `forward_compatible`)
with older binaries, new features can be gated with:
```python
if compat.forward_compatible(year=2018, month=8, day=1):
generate_graph_with_new_features()
else:
generate_graph_so_older_binaries_can_consume_it()
```
However, when adding new features, one may want to unittest it before
the forward compatibility window expires. This context manager enables
such tests. For example:
```python
from tensorflow.python.compat import compat
def testMyNewFeature(self):
with compat.forward_compatibility_horizon(2018, 8, 2):
# Test that generate_graph_with_new_features() has an effect
```
Args:
year: A year (e.g., 2018). Must be an `int`.
month: A month (1 <= month <= 12) in year. Must be an `int`.
day: A day (1 <= day <= 31, depending on the month). Must be an
`int`.
Yields:
Nothing. |
1290 | 1288 | enable_v2_behavior | tensorflow/tensorflow/python/compat/v2_compat.py | 43 | function | Enables TensorFlow 2.x behaviors.
This function can be called at the beginning of the program (before `Tensors`,
`Graphs` or other structures have been created, and before devices have been
initialized). It switches all global behaviors that are different between
TensorFlow 1.x and 2.x to behave as intended for 2.x.
This function is called in the main TensorFlow `__init__.py` file; users should
not need to call it, except during complex migrations.
1291 | 1289 | disable_v2_behavior | tensorflow/tensorflow/python/compat/v2_compat.py | 82 | function | Disables TensorFlow 2.x behaviors.
This function can be called at the beginning of the program (before `Tensors`,
`Graphs` or other structures have been created, and before devices have been
initialized). It switches all global behaviors that are different between
TensorFlow 1.x and 2.x to behave as intended for 1.x.
Users can call this function to disable 2.x behavior during complex migrations. |
1292 | 1290 | convert_graph_def | tensorflow/tensorflow/python/compiler/mlir/mlir.py | 26 | function | Import a GraphDef and convert it to a textual MLIR module.
Args:
graph_def: An object of type graph_pb2.GraphDef or a textual proto
representation of a valid GraphDef.
pass_pipeline: A textual description of an MLIR Pass Pipeline to run on the
module, see MLIR documentation for the
[textual pass pipeline syntax](https://github.com/tensorflow/mlir/blob/master/g3doc/WritingAPass.md#textual-pass-pipeline-specification).
Returns:
A textual representation of the MLIR module corresponding to the graphdef.
Raises a RuntimeError on error. |
1293 | 1291 | TrtPrecisionMode | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 98 | class | |
1294 | 1292 | supported_precision_modes | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 104 | method | |
1295 | 1293 | TrtConversionParams | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 117 | class | Parameters that are used for TF-TRT conversion.
Fields:
rewriter_config_template: a template RewriterConfig proto used to create a
TRT-enabled RewriterConfig. If None, it will use a default one.
max_workspace_size_bytes: the maximum GPU temporary memory which the TRT
engine can use at execution time. This corresponds to the
'workspaceSize' parameter of nvinfer1::IBuilder::setMaxWorkspaceSize().
precision_mode: one of the strings in
TrtPrecisionMode.supported_precision_modes().
minimum_segment_size: the minimum number of nodes required for a subgraph
to be replaced by TRTEngineOp.
is_dynamic_op: whether to generate dynamic TRT ops, which build the
TRT network and engine at run time. Since TensorRT versions < 6.0 do
not support dynamic dimensions other than the batch dimension, this
option needs to be enabled when the TensorFlow graph has a non-batch
dimension of dynamic size. This option should be set to True in TF 2.0.
maximum_cached_engines: max number of cached TRT engines for dynamic TRT
ops. Created TRT engines for a dynamic dimension are cached. This is the
maximum number of engines that can be cached. If the number of cached
engines is already at max but none of them supports the input shapes,
the TRTEngineOp will fall back to run the original TF subgraph that
corresponds to the TRTEngineOp.
use_calibration: this argument is ignored if precision_mode is not INT8.
If set to True, a calibration graph will be created to calibrate the
missing ranges. The calibration graph must be converted to an inference
graph by running calibration with calibrate(). If set to False,
quantization nodes will be expected for every tensor in the graph
(excluding those which will be fused). If a range is missing, an error
will occur. Please note that accuracy may be negatively affected if
there is a mismatch between which tensors TRT quantizes and which
tensors were trained with fake quantization.
max_batch_size: max size for the input batch. This parameter is only
effective when is_dynamic_op=False, which is not supported in TF 2.0.
allow_build_at_runtime: whether to build TensorRT engines during runtime.
If no TensorRT engine can be found in cache that can handle the given
inputs during runtime, then a new TensorRT engine is built at runtime if
allow_build_at_runtime=True, and otherwise native TF is used. This
argument is only effective if is_dynamic_op=True. |
1296 | 1294 | get_tensorrt_rewriter_config | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 292 | function | Returns a RewriterConfig proto for TRT transformation.
Args:
conversion_params: a TrtConversionParams instance.
is_v2: whether we're getting a RewriterConfig for TF 2.0.
disable_non_trt_optimizers: Turn off all default Grappler optimizers.
Returns:
A RewriterConfig proto which sets a TensorRTOptimizer to run Grappler.
Raises:
TypeError: if any of the parameters are of unexpected type.
ValueError: if any of the parameters are of unexpected value. |
1297 | 1295 | is_explicit_batch_mode_enabled | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 387 | function | Checks whether explicit batch is enabled by the rewriter config. |
1298 | 1296 | TrtGraphConverter | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 398 | class | A converter for TF-TRT transformation for TF 1.x GraphDef/SavedModels.
To run the conversion without quantization calibration (e.g. for FP32/FP16
precision modes):
```python
converter = TrtGraphConverter(
input_saved_model_dir="my_dir",
precision_mode=TrtPrecisionMode.FP16)
converted_graph_def = converter.convert()
converter.save(output_saved_model_dir)
```
To run the conversion with quantization calibration:
```python
converter = TrtGraphConverter(
input_saved_model_dir="my_dir",
precision_mode=TrtPrecisionMode.INT8)
converter.convert()
# Run calibration 10 times.
converted_graph_def = converter.calibrate(
fetch_names=['output:0'],
num_runs=10,
feed_dict_fn=lambda: {'input:0': my_next_data()})
converter.save(output_saved_model_dir)
``` |
1299 | 1297 | convert | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 643 | method | Run the TF-TRT conversion.
Returns:
The converted GraphDef for TF 1.x. |
1300 | 1298 | calibrate | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 656 | method | Run the calibration and return the calibrated GraphDef.
Args:
fetch_names: a list of output tensor name to fetch during calibration.
num_runs: number of runs of the graph during calibration.
feed_dict_fn: a function that returns a dictionary mapping input names (as
strings) in the GraphDef to be calibrated to values (e.g. Python list,
numpy arrays, etc). One and only one of `feed_dict_fn` and
`input_map_fn` should be specified.
input_map_fn: a function that returns a dictionary mapping input names (as
strings) in the GraphDef to be calibrated to Tensor objects. The values
of the named input tensors in the GraphDef to be calibrated will be
re-mapped to the respective `Tensor` values during calibration. One and
only one of `feed_dict_fn` and `input_map_fn` should be specified.
Raises:
ValueError: if the input combination is invalid.
RuntimeError: if this method is called in eager mode.
Returns:
The GraphDef after the calibration. |
1301 | 1299 | save | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 749 | method | Save the converted graph as a SavedModel.
Args:
output_saved_model_dir: construct a SavedModel using the converted
GraphDef and save it to the specified directory. This option only works
when the input graph is loaded from a SavedModel, i.e. when
input_saved_model_dir is specified and input_graph_def is None in
__init__().
Raises:
ValueError: if the input to the converter is a GraphDef instead of a
SavedModel. |
1302 | 1300 | TrtGraphConverterV2 | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 880 | class | An offline converter for TF-TRT transformation for TF 2.0 SavedModels.
Currently this is not available on the Windows platform.
Note that in V2, is_dynamic_op=False is not supported, meaning TRT engines
will be built only when the corresponding TRTEngineOp is executed. But we
still provide a way to avoid the cost of building TRT engines during inference
(see more below).
There are several ways to run the conversion:
1. FP32/FP16 precision
```python
params = tf.experimental.tensorrt.ConversionParams(
precision_mode='FP16')
converter = tf.experimental.tensorrt.Converter(
input_saved_model_dir="my_dir", conversion_params=params)
converter.convert()
converter.save(output_saved_model_dir)
```
In this case, no TRT engines will be built or saved in the converted
SavedModel. But if input data is available during conversion, we can still
build and save the TRT engines to reduce the cost during inference (see
option 2 below).
2. FP32/FP16 precision with pre-built engines
```python
params = tf.experimental.tensorrt.ConversionParams(
precision_mode='FP16',
# Set this to a large enough number so it can cache all the engines.
maximum_cached_engines=16)
converter = tf.experimental.tensorrt.Converter(
input_saved_model_dir="my_dir", conversion_params=params)
converter.convert()
# Define a generator function that yields input data, and use it to execute
# the graph to build TRT engines.
# With TensorRT 5.1, different engines will be built (and saved later) for
# different input shapes to the TRTEngineOp.
def my_input_fn():
for _ in range(num_runs):
inp1, inp2 = ...
yield inp1, inp2
converter.build(input_fn=my_input_fn) # Generate corresponding TRT engines
converter.save(output_saved_model_dir) # Generated engines will be saved.
```
In this way, one engine will be built/saved for each unique input shape of
the TRTEngineOp. This is good for applications that cannot afford building
engines during inference but have access to input data that is similar to
the one used in production (for example, that has the same input shapes).
Also, the generated TRT engines are platform dependent, so we need to run
`build()` in an environment that is similar to production (e.g. with
same type of GPU).
3. INT8 precision and calibration with pre-built engines
```python
params = tf.experimental.tensorrt.ConversionParams(
precision_mode='INT8',
# Currently only one INT8 engine is supported in this mode.
maximum_cached_engines=1,
use_calibration=True)
converter = tf.experimental.tensorrt.Converter(
input_saved_model_dir="my_dir", conversion_params=params)
# Define a generator function that yields input data, and run INT8
# calibration with the data. All input data should have the same shape.
# At the end of convert(), the calibration stats (e.g. range information)
# will be saved and can be used to generate more TRT engines with different
# shapes. Also, one TRT engine will be generated (with the same shape as
# the calibration data) to be saved later.
def my_calibration_input_fn():
for _ in range(num_runs):
inp1, inp2 = ...
yield inp1, inp2
converter.convert(calibration_input_fn=my_calibration_input_fn)
# (Optional) Generate more TRT engines offline (same as the previous
# option), to avoid the cost of generating them during inference.
def my_input_fn():
for _ in range(num_runs):
inp1, inp2 = ...
yield inp1, inp2
converter.build(input_fn=my_input_fn)
# Save the converted model and the generated TRT engines.
converter.save(output_saved_model_dir)
``` |
1303 | 1301 | convert | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 1060 | method | Convert the input SavedModel in 2.0 format.
Args:
calibration_input_fn: a generator function that yields input data as a
list or tuple, which will be used to execute the converted signature for
calibration. All the returned input data should have the same shape.
Example: `def input_fn(): yield input1, input2, input3`
Raises:
ValueError: if the input combination is invalid.
Returns:
The TF-TRT converted Function. |
1304 | 1302 | build | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 1126 | method | Run inference with converted graph in order to build TensorRT engines.
Args:
input_fn: a generator function that yields input data as a list or tuple,
which will be used to execute the converted signature to generate TRT
engines. Example:
`def input_fn():
# Let's assume a network with 2 input tensors. We generate 3 sets
# of dummy input data:
input_shapes = [[(1, 16), (2, 16)], # 1st input list
[(2, 32), (4, 32)], # 2nd list of two tensors
[(4, 32), (8, 32)]] # 3rd input list
for shapes in input_shapes:
# return a list of input tensors
yield [np.zeros(x).astype(np.float32) for x in shapes]`
Raises:
NotImplementedError: If build() has already been called.
RuntimeError: If input_fn is None. |
1305 | 1303 | save | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 1189 | method | Save the converted SavedModel.
Args:
output_saved_model_dir: directory to saved the converted SavedModel. |
1306 | 1304 | create_inference_graph | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert.py | 1270 | function | Python wrapper for the TRT transformation.
Args:
input_graph_def: a GraphDef object containing a model to be transformed. If
set to None, the graph will be read from the SavedModel loaded from
input_saved_model_dir.
outputs: list of tensors or node names for the model outputs. Only used when
input_graph_def is not None.
max_batch_size: max size for the input batch.
max_workspace_size_bytes: the maximum GPU temporary memory which the TRT
engine can use at execution time. This corresponds to the 'workspaceSize'
parameter of nvinfer1::IBuilder::setMaxWorkspaceSize().
precision_mode: one of TrtPrecisionMode.supported_precision_modes().
minimum_segment_size: the minimum number of nodes required for a subgraph to
be replaced by TRTEngineOp.
is_dynamic_op: whether to generate dynamic TRT ops which will build the TRT
network and engine at run time.
maximum_cached_engines: max number of cached TRT engines in dynamic TRT ops.
If the number of cached engines is already at max but none of them can
serve the input, the TRTEngineOp will fall back to run the TF function
based on which the TRTEngineOp is created.
input_saved_model_dir: the directory to load the SavedModel which contains
the input graph to transform. Used only when input_graph_def is None.
input_saved_model_tags: list of tags to load the SavedModel.
input_saved_model_signature_key: the key of the signature to optimize the
graph for.
output_saved_model_dir: if not None, construct a SavedModel using the
returned GraphDef and save it to the specified directory. This option only
works when the input graph is loaded from a SavedModel, i.e. when
input_saved_model_dir is specified and input_graph_def is None.
session_config: the ConfigProto used to create a Session. It's also used as
a template to create a TRT-enabled ConfigProto for conversion. If not
specified, a default ConfigProto will be used.
Returns:
A GraphDef transformed from input_graph_def (or the SavedModel graph def
loaded from input_saved_model_dir, if input_graph_def is not present), where
all TRT compatible subgraphs are replaced with TRTEngineOps, and a TF
function is added for each of the subgraphs.
If is_dynamic_op is True, each TRTEngineOp will contain a serialized
subgraph GraphDef, which will be converted to a TRT engine at execution time
and the TRT engine will be cached for future usage. A new TRT engine will be
created each time none of the cached engines match the input shapes. If
it fails to execute the TRT engine or the number of cached engines reaches
maximum_cached_engines, the op will fall back to call the corresponding TF
function.
If is_dynamic_op is False, each TRTEngineOp will contain a serialized TRT
engine created from the corresponding subgraph. No more engines will be
created on the fly, and the op will fall back to call the corresponding TF
function when it fails to execute the engine.
Raises:
ValueError: if the combination of the parameters is invalid. |
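The docstring above describes the engine-cache fallback behavior of dynamic TRT ops: at most `maximum_cached_engines` engines are cached per op, keyed by input shape, and the op falls back to the original TF function when the cache is full and no cached engine can serve the input. A minimal pure-Python model of that policy (all names here are hypothetical; the real logic lives inside the TRTEngineOp kernel):

```python
# Illustrative model of the maximum_cached_engines semantics described
# in the create_inference_graph docstring. This is a sketch, not TF-TRT
# code: "engines" are stand-in callables keyed by input shape.

class EngineCacheModel:
    def __init__(self, maximum_cached_engines, tf_fallback_fn):
        self.max_engines = maximum_cached_engines
        self.tf_fallback_fn = tf_fallback_fn
        self.engines = {}  # input shape -> cached "engine"

    def run(self, input_shape, x):
        if input_shape in self.engines:
            # Cache hit: execute the cached engine for this shape.
            return self.engines[input_shape](x)
        if len(self.engines) < self.max_engines:
            # Room in the cache: build and cache a new "engine".
            engine = lambda v: ('trt', v)
            self.engines[input_shape] = engine
            return engine(x)
        # Cache full and no engine matches: fall back to the TF function.
        return self.tf_fallback_fn(x)
```

With `maximum_cached_engines=2`, the first two distinct input shapes get engines, a third shape falls back to the TF function, and previously seen shapes keep using their cached engines.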
1307 | 1305 | TrtPrecisionMode | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert_windows.py | 31 | class | |
1308 | 1306 | TrtConversionParams | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert_windows.py | 43 | class | Parameters that are used for TF-TRT conversion.
Fields:
rewriter_config_template: a template RewriterConfig proto used to create a
TRT-enabled RewriterConfig. If None, it will use a default one.
max_workspace_size_bytes: the maximum GPU temporary memory which the TRT
engine can use at execution time. This corresponds to the
'workspaceSize' parameter of nvinfer1::IBuilder::setMaxWorkspaceSize().
precision_mode: one of the strings in
TrtPrecisionMode.supported_precision_modes().
minimum_segment_size: the minimum number of nodes required for a subgraph
to be replaced by TRTEngineOp.
is_dynamic_op: whether to generate dynamic TRT ops which will build the
TRT network and engine at run time. For example, since TensorRT
versions < 6.0 do not support dynamic dimensions other than the batch
dimension, this option must be enabled when the TensorFlow graph has a
non-batch dimension of dynamic size. This option should be set to True
in TF 2.0.
maximum_cached_engines: max number of cached TRT engines for dynamic TRT
ops. Created TRT engines for a dynamic dimension are cached. This is the
maximum number of engines that can be cached. If the number of cached
engines is already at max but none of them supports the input shapes,
the TRTEngineOp will fall back to run the original TF subgraph that
corresponds to the TRTEngineOp.
use_calibration: this argument is ignored if precision_mode is not INT8.
If set to True, a calibration graph will be created to calibrate the
missing ranges. The calibration graph must be converted to an inference
graph by running calibration with calibrate(). If set to False,
quantization nodes will be expected for every tensor in the graph
(excluding those which will be fused). If a range is missing, an error
will occur. Please note that accuracy may be negatively affected if
there is a mismatch between which tensors TRT quantizes and which
tensors were trained with fake quantization.
max_batch_size: max size for the input batch. This parameter is only
effective when is_dynamic_op=False which is not supported in TF 2.0. |
1309 | 1307 | TrtConverterWindows | tensorflow/tensorflow/python/compiler/tensorrt/trt_convert_windows.py | 97 | class | An offline converter for TF-TRT transformation for TF 2.0 SavedModels.
Currently this is not available on the Windows platform. |
1310 | 1308 | conv2d_layer | tensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py | 32 | function | |
1311 | 1309 | div_round_up | tensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py | 62 | function | |
1312 | 1310 | build_graph | tensorflow/tensorflow/python/compiler/tensorrt/test/conv2d_test.py | 66 | function | |
1313 | 1311 | CalibrationInt32Support | tensorflow/tensorflow/python/compiler/tensorrt/test/int32_test.py | 68 | class | Test execution of calibration with int32 input. |
1314 | 1312 | GraphFn | tensorflow/tensorflow/python/compiler/tensorrt/test/int32_test.py | 71 | method | |
1315 | 1313 | GetParams | tensorflow/tensorflow/python/compiler/tensorrt/test/int32_test.py | 76 | method | |
1316 | 1314 | ExpectedEnginesToBuild | tensorflow/tensorflow/python/compiler/tensorrt/test/int32_test.py | 87 | method | |
1317 | 1315 | IsQuantizationMode | tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py | 95 | function | |
1318 | 1316 | IsQuantizationWithCalibration | tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py | 99 | function | |
1319 | 1317 | GraphState | tensorflow/tensorflow/python/compiler/tensorrt/test/tf_trt_integration_test_base.py | 103 | class | |
1320 | 1318 | GetGraph | tensorflow/tensorflow/python/compiler/tensorrt/test/testdata/gen_tftrt_model.py | 49 | function | Define graph. |
1321 | 1319 | GenerateModelV2 | tensorflow/tensorflow/python/compiler/tensorrt/test/testdata/gen_tftrt_model.py | 59 | function | Generate and convert a model using TFv2 API. |
1322 | 1320 | GenerateModelV1 | tensorflow/tensorflow/python/compiler/tensorrt/test/testdata/gen_tftrt_model.py | 90 | function | Generate and convert a model using TFv1 API. |
1323 | 1321 | experimental_jit_scope | tensorflow/tensorflow/python/compiler/xla/jit.py | 42 | function | Enable or disable JIT compilation of operators within the scope.
NOTE: This is an experimental feature.
The compilation is a hint and only supported on a best-effort basis.
Example usage:
```python
with tf.xla.experimental.jit_scope():
c = tf.matmul(a, b) # compiled
with tf.xla.experimental.jit_scope(compile_ops=False):
d = tf.matmul(a, c) # not compiled
with tf.xla.experimental.jit_scope(
compile_ops=lambda node_def: 'matmul' in node_def.op.lower()):
e = tf.matmul(a, b) + d # matmul is compiled, the addition is not.
```
Example of `separate_compiled_gradients`:
```python
# In the example below, the computations for f, g and h will all be compiled
# in separate scopes.
with tf.xla.experimental.jit_scope(
separate_compiled_gradients=True):
f = tf.matmul(a, b)
g = tf.gradients([f], [a, b], name='mygrads1')
h = tf.gradients([f], [a, b], name='mygrads2')
```
Args:
compile_ops: Whether to enable or disable compilation in the scope.
Either a Python bool, or a callable that accepts the parameter
`node_def` and returns a python bool.
separate_compiled_gradients: If true put each gradient subgraph into a
separate compilation scope. This gives fine-grained control over which
portions of the graph will be compiled as a single unit. Compiling
gradients separately may yield better performance for some graphs.
The scope is named based on the scope of the forward computation as well
as the name of the gradients. As a result, the gradients will be compiled
in a scope that is separate from both the forward computation, and from
other gradients.
Raises:
RuntimeError: if called when eager execution is enabled.
Yields:
The current scope, enabling or disabling compilation. |
1324 | 1322 | enable_jit_nonstateful | tensorflow/tensorflow/python/compiler/xla/jit_test.py | 39 | function | |
1325 | 1323 | compile | tensorflow/tensorflow/python/compiler/xla/xla.py | 67 | function | Builds an operator that compiles and runs `computation` with XLA.
NOTE: In eager mode, `computation` will have `@tf.function` semantics.
Args:
computation: A Python function that builds a computation to apply to the
input. If the function takes n inputs, 'inputs' should be a list of n
tensors.
`computation` may return a list of operations and tensors. Tensors must
come before operations in the returned list. The return value of
`compile` is a list of tensors corresponding to the tensors from the
output of `computation`.
All `Operation`s returned from `computation` will be executed when
evaluating any of the returned output tensors.
inputs: A list of inputs or `None` (equivalent to an empty list). Each input
can be a nested structure containing values that are convertible to
tensors. Note that passing an N-dimension list of compatible values will
result in an N-dimension list of scalar tensors rather than a single rank-N
tensor. If you need different behavior, convert part of inputs to tensors
with `tf.convert_to_tensor`.
Returns:
Same data structure as if computation(*inputs) is called directly with some
exceptions for correctness. Exceptions include:
1) None output: a NoOp would be returned which control-depends on
computation.
2) Single value output: A tuple containing the value would be returned.
3) Operation-only outputs: a NoOp would be returned which
control-depends on computation.
TODO(b/121383831): Investigate removing these special cases.
Raises:
RuntimeError: if called when eager execution is enabled.
Known issues:
When a tf.random operation is built with XLA, the implementation doesn't
pass the user provided seed to the XLA compiler. As such, the XLA compiler
generates a random number and uses it as a seed when compiling the
operation. This implementation causes a violation of the Tensorflow
defined semantics in two aspects. First, changing the value of the user
defined seed doesn't change the numbers generated by the operation.
Second, when a seed is not specified, running the program multiple times
will generate the same numbers. |
1326 | 1324 | XLACompileContext | tensorflow/tensorflow/python/compiler/xla/xla.py | 125 | class | A `ControlFlowContext` for nodes inside an XLA computation cluster.
THIS IS ONLY FOR TENSORFLOW INTERNAL IMPLEMENTATION, DO NOT USE DIRECTLY.
The primary role of `XLACompileContext` is to mark operators inside a
xla.compile() computation with attribute "_xla_compile_id=XYZ", where XYZ is
a unique name.
`ControlFlowContext` is used to perform the annotation since it integrates
with Tensorflow constructs like ResourceVariables. For example, if a
`ResourceVariable` is constructed inside a xla.compile() block, the
`ResourceVariable` implementation can use
`with ops.control_dependencies(None)` to build the variable's definition
outside the compiled computation. |
1327 | 1325 | report_unsupported_operations | tensorflow/tensorflow/python/compiler/xla/xla.py | 159 | method | |
1328 | 1326 | AddOp | tensorflow/tensorflow/python/compiler/xla/xla.py | 195 | method | Create op in XLACompileContext and notifies outer context recursively. |
1329 | 1327 | AddValue | tensorflow/tensorflow/python/compiler/xla/xla.py | 268 | method | Add `val` to the current context and its outer context recursively. |
1330 | 1328 | AddInnerOp | tensorflow/tensorflow/python/compiler/xla/xla.py | 285 | method | |
1331 | 1329 | grad_state | tensorflow/tensorflow/python/compiler/xla/xla.py | 291 | method | |
1332 | 1330 | back_prop | tensorflow/tensorflow/python/compiler/xla/xla.py | 299 | method | Forwards to the enclosing while context, if any. |
1333 | 1331 | is_flat | tensorflow/tensorflow/python/compiler/xla/xla.py | 409 | function | Checks if outputs is a flat structure.
Following structures and values are considered flat:
1) None
2) A single object
3) A list or tuple of Tensors/Operations
The only structures that this function understands are sequences,
dictionaries and types defined using the attrs library. E.g. this means
that if outputs contains a single user-defined Object, it is considered to
be flat. Errors are raised later on if that Object cannot be converted to a
Tensor.
Args:
outputs: Output from `computation` inside `xla.compile`.
Returns:
A boolean indicating whether outputs is flat. |
1334 | 1332 | check_function_argument_count | tensorflow/tensorflow/python/compiler/xla/xla.py | 591 | function | Validate the number of input arguments to an XLA function.
Args:
func: the Python function that will be called to generate the body of an XLA
computation graph.
input_arity: the number of explicit arguments supplied by the caller.
infeed_queue: if not None, the infeed queue that will supply
additional arguments to the function.
Returns:
None if function can be called with the supplied number of
arguments, or an error string if it cannot. |
1335 | 1333 | BatchBenchmark | tensorflow/tensorflow/python/data/benchmarks/batch_benchmark.py | 27 | class | Benchmarks for `tf.data.Dataset.batch()`. |
1336 | 1334 | benchmark_batch_sparse | tensorflow/tensorflow/python/data/benchmarks/batch_benchmark.py | 30 | method | |
1337 | 1335 | benchmark_batch_dense | tensorflow/tensorflow/python/data/benchmarks/batch_benchmark.py | 51 | method | |
1338 | 1336 | DatasetBenchmarkBase | tensorflow/tensorflow/python/data/benchmarks/benchmark_base.py | 31 | class | Base class for dataset benchmarks. |
1339 | 1337 | run_benchmark | tensorflow/tensorflow/python/data/benchmarks/benchmark_base.py | 34 | method | Benchmarks the dataset.
Runs the dataset `iters` times. In each iteration, the benchmark measures
the time it takes to go through `num_elements` elements of the dataset.
Args:
dataset: Dataset to benchmark.
num_elements: Number of dataset elements to iterate through each benchmark
iteration.
iters: Number of times to repeat the timing.
warmup: If true, warms up the session caches by running an untimed run.
apply_default_optimizations: Determines whether default optimizations
should be applied.
Returns:
A float, representing the per-element wall time of the dataset in seconds.
This is the median time (with respect to `iters`) it takes for the dataset
to go through `num_elements` elements, divided by `num_elements.` |
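The `run_benchmark` docstring above specifies the reported statistic precisely: the median (over `iters` repetitions) of the time to consume `num_elements` elements, divided by `num_elements`, after an optional untimed warmup pass. A hedged sketch of that timing arithmetic, with `next_element_fn` as a stand-in for pulling one element from the dataset:

```python
# Sketch of the per-element wall-time computation described in
# run_benchmark's docstring. `next_element_fn` is a hypothetical
# callable that yields one dataset element per call.

import time
import statistics

def per_element_wall_time(next_element_fn, num_elements, iters, warmup=True):
    if warmup:
        # Untimed pass to warm caches, mirroring the `warmup` argument.
        for _ in range(num_elements):
            next_element_fn()
    deltas = []
    for _ in range(iters):
        start = time.perf_counter()
        for _ in range(num_elements):
            next_element_fn()
        deltas.append(time.perf_counter() - start)
    # Median over iterations, divided by the number of elements.
    return statistics.median(deltas) / num_elements
```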
1340 | 1338 | run_and_report_benchmark | tensorflow/tensorflow/python/data/benchmarks/benchmark_base.py | 87 | method | |
1341 | 1339 | FilterBenchmark | tensorflow/tensorflow/python/data/benchmarks/filter_benchmark.py | 26 | class | Benchmarks for `tf.data.Dataset.filter()`. |
1342 | 1340 | benchmark_simple_function | tensorflow/tensorflow/python/data/benchmarks/filter_benchmark.py | 34 | method | |
1343 | 1341 | benchmark_return_component_optimization | tensorflow/tensorflow/python/data/benchmarks/filter_benchmark.py | 37 | method | |
1344 | 1342 | SingleThreadedFlatMapDataset | tensorflow/tensorflow/python/data/benchmarks/from_tensor_slices_benchmark.py | 30 | class | A `Dataset` that maps a function over its input and flattens the result. |
1345 | 1343 | element_spec | tensorflow/tensorflow/python/data/benchmarks/from_tensor_slices_benchmark.py | 54 | method | |
1346 | 1344 | FromTensorSlicesBenchmark | tensorflow/tensorflow/python/data/benchmarks/from_tensor_slices_benchmark.py | 62 | class | Benchmarks for `tf.data.Dataset.from_tensor_slices()`. |
1347 | 1345 | benchmark_slice_repeat_batch | tensorflow/tensorflow/python/data/benchmarks/from_tensor_slices_benchmark.py | 65 | method | |
1348 | 1346 | benchmark_reshape_slice_repeat | tensorflow/tensorflow/python/data/benchmarks/from_tensor_slices_benchmark.py | 82 | method | |
1349 | 1347 | benchmark_slice_repeat_sparse | tensorflow/tensorflow/python/data/benchmarks/from_tensor_slices_benchmark.py | 101 | method | |
1350 | 1348 | benchmark_slice_batch_cache_repeat | tensorflow/tensorflow/python/data/benchmarks/from_tensor_slices_benchmark.py | 132 | method | |
1351 | 1349 | make_dataset | tensorflow/tensorflow/python/data/benchmarks/from_tensor_slices_benchmark.py | 116 | method | |
1352 | 1350 | ListFilesBenchmark | tensorflow/tensorflow/python/data/benchmarks/list_files_benchmark.py | 35 | class | Benchmarks for `tf.data.Dataset.list_files()`. |
1353 | 1351 | benchmark_nested_directories | tensorflow/tensorflow/python/data/benchmarks/list_files_benchmark.py | 38 | method | |
1354 | 1352 | MapBenchmark | tensorflow/tensorflow/python/data/benchmarks/map_benchmark.py | 32 | class | Benchmarks for `tf.data.Dataset.map()`. |
1355 | 1353 | benchmark_chain_of_maps | tensorflow/tensorflow/python/data/benchmarks/map_benchmark.py | 35 | method | |
1356 | 1354 | benchmark_map_fan_out | tensorflow/tensorflow/python/data/benchmarks/map_benchmark.py | 53 | method | |
1357 | 1355 | benchmark_stats | tensorflow/tensorflow/python/data/benchmarks/map_benchmark.py | 72 | method | |
1358 | 1356 | benchmark_sequential_control_flow | tensorflow/tensorflow/python/data/benchmarks/map_benchmark.py | 85 | method | |
1359 | 1357 | benchmark_parallel_control_flow | tensorflow/tensorflow/python/data/benchmarks/map_benchmark.py | 103 | method | |
1360 | 1358 | benchmark_helper | tensorflow/tensorflow/python/data/benchmarks/map_benchmark.py | 37 | method | |
1361 | 1359 | benchmark_helper | tensorflow/tensorflow/python/data/benchmarks/map_benchmark.py | 56 | method | |
1362 | 1360 | fn | tensorflow/tensorflow/python/data/benchmarks/map_benchmark.py | 88 | method | |
1363 | 1361 | fn | tensorflow/tensorflow/python/data/benchmarks/map_benchmark.py | 107 | method | |
1364 | 1362 | body | tensorflow/tensorflow/python/data/benchmarks/map_benchmark.py | 91 | method | |
1365 | 1363 | MetaBenchmark | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 31 | class | Benchmark that compares various ways of running tf.data benchmarks. |
1366 | 1364 | setup_fast_dataset | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 37 | method | |
1367 | 1365 | benchmark_fast_dataset_with_only_cpp_iterations | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 44 | method | |
1368 | 1366 | benchmark_fast_dataset_with_session_run | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 48 | method | |
1369 | 1367 | benchmark_fast_dataset_with_session_callable | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 52 | method | |
1370 | 1368 | benchmark_fast_dataset_in_eager | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 56 | method | |
1371 | 1369 | setup_slow_dataset | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 61 | method | |
1372 | 1370 | benchmark_slow_dataset_with_only_cpp_iterations | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 67 | method | |
1373 | 1371 | benchmark_slow_dataset_with_session_run | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 71 | method | |
1374 | 1372 | benchmark_slow_dataset_with_session_callable | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 75 | method | |
1375 | 1373 | benchmark_slow_dataset_in_eager | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 79 | method | |
1376 | 1374 | report | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 84 | method | |
1377 | 1375 | run_benchmark_in_eager | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 105 | method | |
1378 | 1376 | run_benchmark_with_session_run | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 113 | method | |
1379 | 1377 | run_benchmark_with_only_cpp_iterations | tensorflow/tensorflow/python/data/benchmarks/meta_benchmark.py | 132 | method | Benchmarks the dataset with the iterations performed in C++. |
1380 | 1378 | PrefetchBenchmark | tensorflow/tensorflow/python/data/benchmarks/prefetch_benchmark.py | 24 | class | Benchmarks for `tf.data.Dataset.prefetch()`. |
1381 | 1379 | benchmark_prefetch | tensorflow/tensorflow/python/data/benchmarks/prefetch_benchmark.py | 27 | method | |
1382 | 1380 | RangeBenchmark | tensorflow/tensorflow/python/data/benchmarks/range_benchmark.py | 24 | class | Benchmarks for `tf.data.Dataset.range()`. |
1383 | 1381 | benchmark_range | tensorflow/tensorflow/python/data/benchmarks/range_benchmark.py | 27 | method | |
1384 | 1382 | AutotuneBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/autotune_benchmark.py | 31 | class | Benchmarks for autotuning performance knobs. |
1385 | 1383 | benchmark_map | tensorflow/tensorflow/python/data/experimental/benchmarks/autotune_benchmark.py | 66 | method | |
1386 | 1384 | benchmark_map_and_batch | tensorflow/tensorflow/python/data/experimental/benchmarks/autotune_benchmark.py | 87 | method | |
1387 | 1385 | benchmark_interleave | tensorflow/tensorflow/python/data/experimental/benchmarks/autotune_benchmark.py | 110 | method | |
1388 | 1386 | benchmark_map_and_interleave | tensorflow/tensorflow/python/data/experimental/benchmarks/autotune_benchmark.py | 134 | method | |
1389 | 1387 | benchmark_map_batch_and_interleave | tensorflow/tensorflow/python/data/experimental/benchmarks/autotune_benchmark.py | 182 | method | |
1390 | 1388 | f1 | tensorflow/tensorflow/python/data/experimental/benchmarks/autotune_benchmark.py | 152 | method | |
1391 | 1389 | f2 | tensorflow/tensorflow/python/data/experimental/benchmarks/autotune_benchmark.py | 155 | method | |
1392 | 1390 | CsvDatasetBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/csv_dataset_benchmark.py | 38 | class | Benchmarks for `tf.data.experimental.CsvDataset`. |
1393 | 1391 | benchmark_map_with_floats | tensorflow/tensorflow/python/data/experimental/benchmarks/csv_dataset_benchmark.py | 90 | method | |
1394 | 1392 | benchmark_map_with_strings | tensorflow/tensorflow/python/data/experimental/benchmarks/csv_dataset_benchmark.py | 100 | method | |
1395 | 1393 | benchmark_csv_dataset_with_floats | tensorflow/tensorflow/python/data/experimental/benchmarks/csv_dataset_benchmark.py | 110 | method | |
1396 | 1394 | benchmark_csv_dataset_with_strings | tensorflow/tensorflow/python/data/experimental/benchmarks/csv_dataset_benchmark.py | 120 | method | |
1397 | 1395 | MapAndBatchBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/map_and_batch_benchmark.py | 40 | class | Benchmarks for `tf.data.experimental.map_and_batch()`. |
1398 | 1396 | benchmark_map_and_batch | tensorflow/tensorflow/python/data/experimental/benchmarks/map_and_batch_benchmark.py | 43 | method | Measures the performance of parallelized batching. |
1399 | 1397 | benchmark_map_and_batch_chaining_versus_fusing | tensorflow/tensorflow/python/data/experimental/benchmarks/map_and_batch_benchmark.py | 97 | method | Compares the performance of chaining and fusing map and batch.
NOTE: It is recommended to build the benchmark with
`-c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-gmlt`
and execute it on a machine with at least 32 CPU cores. |
1400 | 1398 | name | tensorflow/tensorflow/python/data/experimental/benchmarks/map_and_batch_benchmark.py | 116 | method | |
1401 | 1399 | benchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/map_and_batch_benchmark.py | 126 | method | Runs the benchmark for the given series. |
1402 | 1400 | make_dataset | tensorflow/tensorflow/python/data/experimental/benchmarks/map_and_batch_benchmark.py | 129 | method | |
1403 | 1401 | MapDefunBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/map_defun_benchmark.py | 34 | class | Benchmarks for MapDefunOp. |
1404 | 1402 | benchmark_defun_vs_map_fn | tensorflow/tensorflow/python/data/experimental/benchmarks/map_defun_benchmark.py | 52 | method | Benchmarks to compare the performance of MapDefun vs tf.map_fn. |
1405 | 1403 | defun | tensorflow/tensorflow/python/data/experimental/benchmarks/map_defun_benchmark.py | 56 | method | |
1406 | 1404 | fn | tensorflow/tensorflow/python/data/experimental/benchmarks/map_defun_benchmark.py | 59 | method | |
1407 | 1405 | MapVectorizationBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py | 97 | class | Benchmarks for the `MapVectorization` optimization. |
1408 | 1406 | benchmark_identity | tensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py | 147 | method | |
1409 | 1407 | benchmark_add_const | tensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py | 151 | method | |
1410 | 1408 | benchmark_return_const | tensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py | 154 | method | |
1411 | 1409 | benchmark_select | tensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py | 157 | method | |
1412 | 1410 | benchmark_cast | tensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py | 160 | method | |
1413 | 1411 | benchmark_reshape | tensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py | 164 | method | |
1414 | 1412 | benchmark_decode_csv | tensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py | 168 | method | |
1415 | 1413 | benchmark_parse_single_example | tensorflow/tensorflow/python/data/experimental/benchmarks/map_vectorization_benchmark.py | 172 | method | |
1416 | 1414 | MatchingFilesBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/matching_files_benchmark.py | 35 | class | Benchmark for the experimental `MatchingFilesDataset`. |
1417 | 1415 | benchmark_nested_directories | tensorflow/tensorflow/python/data/experimental/benchmarks/matching_files_benchmark.py | 38 | method | |
1418 | 1416 | OptimizationBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/optimize_benchmark.py | 32 | class | Benchmarks for static optimizations. |
1419 | 1417 | benchmark_map_fusion | tensorflow/tensorflow/python/data/experimental/benchmarks/optimize_benchmark.py | 35 | method | Evaluates the performance of map fusion. |
1420 | 1418 | benchmark_map_and_filter_fusion | tensorflow/tensorflow/python/data/experimental/benchmarks/optimize_benchmark.py | 76 | method | Evaluates the performance of map and filter fusion. |
1421 | 1419 | benchmark_filter_fusion | tensorflow/tensorflow/python/data/experimental/benchmarks/optimize_benchmark.py | 119 | method | |
1422 | 1420 | ParallelInterleaveBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py | 68 | class | Benchmarks for `tf.data.experimental.parallel_interleave()`. |
1423 | 1421 | apply_interleave | tensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py | 71 | method | |
1424 | 1422 | make_dataset | tensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py | 89 | method | |
1425 | 1423 | benchmark_remote_file_simulation | tensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py | 126 | method | |
1426 | 1424 | benchmark_fast_input | tensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py | 135 | method | |
1427 | 1425 | benchmark_single_cycle | tensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py | 142 | method | |
1428 | 1426 | benchmark_single_parallel_call | tensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py | 152 | method | |
1429 | 1427 | benchmark_long_cycle | tensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py | 159 | method | |
1430 | 1428 | benchmark_stats | tensorflow/tensorflow/python/data/experimental/benchmarks/parallel_interleave_benchmark.py | 167 | method | |
1431 | 1429 | RejectionResampleBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/rejection_resample_benchmark.py | 56 | class | Benchmarks for `tf.data.experimental.rejection_resample()`. |
1432 | 1430 | benchmark_resample_performance | tensorflow/tensorflow/python/data/experimental/benchmarks/rejection_resample_benchmark.py | 59 | method | |
1433 | 1431 | SnapshotDatasetBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/snapshot_dataset_benchmark.py | 34 | class | Benchmarks for `tf.data.experimental.snapshot()`. |
1434 | 1432 | benchmarkWriteSnapshotGzipCompression | tensorflow/tensorflow/python/data/experimental/benchmarks/snapshot_dataset_benchmark.py | 68 | method | |
1435 | 1433 | benchmarkWriteSnapshotSnappyCompression | tensorflow/tensorflow/python/data/experimental/benchmarks/snapshot_dataset_benchmark.py | 76 | method | |
1436 | 1434 | benchmarkWriteSnapshotSimple | tensorflow/tensorflow/python/data/experimental/benchmarks/snapshot_dataset_benchmark.py | 84 | method | |
1437 | 1435 | benchmarkPassthroughSnapshotSimple | tensorflow/tensorflow/python/data/experimental/benchmarks/snapshot_dataset_benchmark.py | 94 | method | |
1438 | 1436 | benchmarkReadSnapshotSimple | tensorflow/tensorflow/python/data/experimental/benchmarks/snapshot_dataset_benchmark.py | 104 | method | |
1439 | 1437 | benchmarkReadSnapshotGzipCompression | tensorflow/tensorflow/python/data/experimental/benchmarks/snapshot_dataset_benchmark.py | 114 | method | |
1440 | 1438 | benchmarkReadSnapshotSnappyCompression | tensorflow/tensorflow/python/data/experimental/benchmarks/snapshot_dataset_benchmark.py | 123 | method | |
1441 | 1439 | UnbatchBenchmark | tensorflow/tensorflow/python/data/experimental/benchmarks/unbatch_benchmark.py | 32 | class | Benchmarks for `tf.data.Dataset.unbatch()`. |
1442 | 1440 | benchmark_native_unbatch | tensorflow/tensorflow/python/data/experimental/benchmarks/unbatch_benchmark.py | 35 | method | |
1443 | 1441 | benchmark_old_unbatch_implementation | tensorflow/tensorflow/python/data/experimental/benchmarks/unbatch_benchmark.py | 72 | method | |
1444 | 1442 | chunk | tensorflow/tensorflow/python/data/experimental/kernel_tests/auto_shard_dataset_test.py | 46 | function | |
1445 | 1443 | remove_variants | tensorflow/tensorflow/python/data/experimental/kernel_tests/serialization/dataset_serialization_test_base.py | 41 | function | Remove variants from a nest structure, so sess.run will execute. |
1446 | 1444 | dense_to_ragged_batch | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 36 | function | A transformation that batches ragged elements into `tf.RaggedTensor`s.
This transformation combines multiple consecutive elements of the input
dataset into a single element.
Like `tf.data.Dataset.batch`, the components of the resulting element will
have an additional outer dimension, which will be `batch_size` (or
`N % batch_size` for the last element if `batch_size` does not divide the
number of input elements `N` evenly and `drop_remainder` is `False`). If
your program depends on the batches having the same outer dimension, you
should set the `drop_remainder` argument to `True` to prevent the smaller
batch from being produced.
Unlike `tf.data.Dataset.batch`, the input elements to be batched may have
different shapes:
* If an input element is a `tf.Tensor` whose static `tf.TensorShape` is
fully defined, then it is batched as normal.
* If an input element is a `tf.Tensor` whose static `tf.TensorShape` contains
one or more axes with unknown size (i.e., `shape[i]=None`), then the output
will contain a `tf.RaggedTensor` that is ragged up to any of such
dimensions.
* If an input element is a `tf.RaggedTensor` or any other type, then it is
batched as normal.
Example:
>>> dataset = tf.data.Dataset.from_tensor_slices(np.arange(6))
>>> dataset = dataset.map(lambda x: tf.range(x))
>>> dataset.element_spec.shape
TensorShape([None])
>>> dataset = dataset.apply(
... tf.data.experimental.dense_to_ragged_batch(batch_size=2))
>>> for batch in dataset:
... print(batch)
<tf.RaggedTensor [[], [0]]>
<tf.RaggedTensor [[0, 1], [0, 1, 2]]>
<tf.RaggedTensor [[0, 1, 2, 3], [0, 1, 2, 3, 4]]>
Args:
batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
consecutive elements of this dataset to combine in a single batch.
drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing
whether the last batch should be dropped in the case it has fewer than
`batch_size` elements; the default behavior is not to drop the smaller
batch.
row_splits_dtype: The dtype that should be used for the `row_splits` of any
new ragged tensors. Existing `tf.RaggedTensor` elements do not have their
row_splits dtype changed.
Returns:
Dataset: A `Dataset`. |
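The doctest in the `dense_to_ragged_batch` docstring shows consecutive variable-length rows grouped into batches of `batch_size` with no padding. A pure-Python model of that grouping (a sketch of the semantics only; the real transformation produces `tf.RaggedTensor`s):

```python
# Pure-Python model of the ragged batching shown in the docstring
# example: rows of differing lengths are grouped `batch_size` at a time,
# and the final short batch is kept unless drop_remainder is True.

def dense_to_ragged_batches(rows, batch_size, drop_remainder=False):
    batches = []
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        if drop_remainder and len(batch) < batch_size:
            break
        batches.append(batch)
    return batches

# Mirrors the doctest: tf.range(x) for x in 0..5, batched two at a time.
rows = [list(range(x)) for x in range(6)]
batches = dense_to_ragged_batches(rows, 2)
# batches[0] corresponds to <tf.RaggedTensor [[], [0]]> and so on.
```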
1447 | 1445 | dense_to_sparse_batch | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 102 | function | A transformation that batches ragged elements into `tf.sparse.SparseTensor`s.
Like `Dataset.padded_batch()`, this transformation combines multiple
consecutive elements of the dataset, which might have different
shapes, into a single element. The resulting element has three
components (`indices`, `values`, and `dense_shape`), which
comprise a `tf.sparse.SparseTensor` that represents the same data. The
`row_shape` represents the dense shape of each row in the
resulting `tf.sparse.SparseTensor`, to which the effective batch size is
prepended. For example:
```python
# NOTE: The following examples use `{ ... }` to represent the
# contents of a dataset.
a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] }
a.apply(tf.data.experimental.dense_to_sparse_batch(
batch_size=2, row_shape=[6])) ==
{
([[0, 0], [0, 1], [0, 2], [1, 0], [1, 1]], # indices
['a', 'b', 'c', 'a', 'b'], # values
[2, 6]), # dense_shape
([[0, 0], [0, 1], [0, 2], [0, 3]],
['a', 'b', 'c', 'd'],
[1, 6])
}
```
Args:
batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
consecutive elements of this dataset to combine in a single batch.
row_shape: A `tf.TensorShape` or `tf.int64` vector tensor-like object
representing the equivalent dense shape of a row in the resulting
`tf.sparse.SparseTensor`. Each element of this dataset must have the same
rank as `row_shape`, and must have size less than or equal to `row_shape`
in each dimension.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
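The `(indices, values, dense_shape)` triple from the example can be reproduced with a short pure-Python sketch (an illustration of the COO layout, not the TF kernel):

```python
def dense_to_sparse_batch(rows, row_shape):
    """Convert variable-length rows into COO sparse components.

    The value at rows[i][j] gets index [i, j]; dense_shape is the batch
    size prepended to row_shape, as described in the docstring above.
    """
    indices = [[i, j] for i, row in enumerate(rows) for j in range(len(row))]
    values = [v for row in rows for v in row]
    dense_shape = [len(rows)] + list(row_shape)
    return indices, values, dense_shape

# First batch from the example: two rows, row_shape=[6].
print(dense_to_sparse_batch([['a', 'b', 'c'], ['a', 'b']], [6]))
```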
1448 | 1446 | map_and_batch_with_legacy_function | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 153 | function | Fused implementation of `map` and `batch`.
NOTE: This is an escape hatch for existing uses of `map_and_batch` that do not
work with V2 functions. New uses are strongly discouraged and existing uses
should migrate to `map_and_batch`, as this method will be removed in V2.
Args:
map_func: A function mapping a nested structure of tensors to another
nested structure of tensors.
batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
consecutive elements of this dataset to combine in a single batch.
num_parallel_batches: (Optional.) A `tf.int64` scalar `tf.Tensor`,
representing the number of batches to create in parallel. On one hand,
higher values can help mitigate the effect of stragglers. On the other
hand, higher values can increase contention if CPU is scarce.
drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing
whether the last batch should be dropped in case its size is smaller than
desired; the default behavior is not to drop the smaller batch.
num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`,
representing the number of elements to process in parallel. If not
specified, `batch_size * num_parallel_batches` elements will be processed
in parallel. If the value `tf.data.experimental.AUTOTUNE` is used, then
the number of parallel calls is set dynamically based on available CPU.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`.
Raises:
ValueError: If both `num_parallel_batches` and `num_parallel_calls` are
specified. |
1449 | 1447 | map_and_batch | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 213 | function | Fused implementation of `map` and `batch`.
Maps `map_func` across `batch_size` consecutive elements of this dataset
and then combines them into a batch. Functionally, it is equivalent to `map`
followed by `batch`. This API is temporary and deprecated since input pipeline
optimization now fuses consecutive `map` and `batch` operations automatically.
Args:
map_func: A function mapping a nested structure of tensors to another
nested structure of tensors.
batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
consecutive elements of this dataset to combine in a single batch.
num_parallel_batches: (Optional.) A `tf.int64` scalar `tf.Tensor`,
representing the number of batches to create in parallel. On one hand,
higher values can help mitigate the effect of stragglers. On the other
hand, higher values can increase contention if CPU is scarce.
drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing
whether the last batch should be dropped in case its size is smaller than
desired; the default behavior is not to drop the smaller batch.
num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`,
representing the number of elements to process in parallel. If not
specified, `batch_size * num_parallel_batches` elements will be processed
in parallel. If the value `tf.data.experimental.AUTOTUNE` is used, then
the number of parallel calls is set dynamically based on available CPU.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`.
Raises:
ValueError: If both `num_parallel_batches` and `num_parallel_calls` are
specified. |
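Functionally the fusion is `map` followed by `batch`; a sequential pure-Python sketch of that contract (without the parallelism the real op provides):

```python
def map_and_batch(elements, map_func, batch_size, drop_remainder=False):
    """Yield batches of map_func applied to consecutive elements."""
    batch = []
    for x in elements:
        batch.append(map_func(x))
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch and not drop_remainder:
        yield batch  # smaller final batch, kept by default

print(list(map_and_batch(range(5), lambda x: x * x, 2)))
# [[0, 1], [4, 9], [16]]
```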
1450 | 1448 | unbatch | tensorflow/tensorflow/python/data/experimental/ops/batching.py | 269 | function | Splits elements of a dataset into multiple elements on the batch dimension.
For example, if elements of the dataset are shaped `[B, a0, a1, ...]`,
where `B` may vary for each input element, then for each element in the
dataset, the unbatched dataset will contain `B` consecutive elements
of shape `[a0, a1, ...]`.
```python
# NOTE: The following example uses `{ ... }` to represent the contents
# of a dataset.
a = { ['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd'] }
a.unbatch() == {
'a', 'b', 'c', 'a', 'b', 'a', 'b', 'c', 'd'}
```
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
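In pure-Python terms, `unbatch` is a flatten over the first dimension; a sketch under that reading:

```python
from itertools import chain

def unbatch(batches):
    """Flatten one level: each element of each batch becomes its own element."""
    return chain.from_iterable(batches)

batches = [['a', 'b', 'c'], ['a', 'b'], ['a', 'b', 'c', 'd']]
print(list(unbatch(batches)))
# ['a', 'b', 'c', 'a', 'b', 'a', 'b', 'c', 'd']
```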
1451 | 1449 | cardinality | tensorflow/tensorflow/python/data/experimental/ops/cardinality.py | 38 | function | Returns the cardinality of `dataset`, if known.
The operation returns the cardinality of `dataset`. The operation may return
`tf.data.experimental.INFINITE_CARDINALITY` if `dataset` contains an infinite
number of elements or `tf.data.experimental.UNKNOWN_CARDINALITY` if the
analysis fails to determine the number of elements in `dataset` (e.g. when the
dataset source is a file).
>>> dataset = tf.data.Dataset.range(42)
>>> print(tf.data.experimental.cardinality(dataset).numpy())
42
>>> dataset = dataset.repeat()
>>> cardinality = tf.data.experimental.cardinality(dataset)
>>> print((cardinality == tf.data.experimental.INFINITE_CARDINALITY).numpy())
True
>>> dataset = dataset.filter(lambda x: True)
>>> cardinality = tf.data.experimental.cardinality(dataset)
>>> print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy())
True
Args:
dataset: A `tf.data.Dataset` for which to determine cardinality.
Returns:
A scalar `tf.int64` `Tensor` representing the cardinality of `dataset`. If
the cardinality is infinite or unknown, the operation returns the named
constant `INFINITE_CARDINALITY` and `UNKNOWN_CARDINALITY` respectively. |
1452 | 1450 | assert_cardinality | tensorflow/tensorflow/python/data/experimental/ops/cardinality.py | 72 | function | Asserts the cardinality of the input dataset.
NOTE: The following assumes that "examples.tfrecord" contains 42 records.
>>> dataset = tf.data.TFRecordDataset("examples.tfrecord")
>>> cardinality = tf.data.experimental.cardinality(dataset)
>>> print((cardinality == tf.data.experimental.UNKNOWN_CARDINALITY).numpy())
True
>>> dataset = dataset.apply(tf.data.experimental.assert_cardinality(42))
>>> print(tf.data.experimental.cardinality(dataset).numpy())
42
Args:
expected_cardinality: The expected cardinality of the input dataset.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`.
Raises:
FailedPreconditionError: The assertion is checked at runtime (when iterating
the dataset) and an error is raised if the actual and expected cardinality
differ. |
1453 | 1451 | compress | tensorflow/tensorflow/python/data/experimental/ops/compression_ops.py | 24 | function | Compress a dataset element.
Args:
element: A nested structure of types supported by Tensorflow.
Returns:
A variant tensor representing the compressed element. This variant can be
passed to `uncompress` to get back the original element. |
1454 | 1452 | uncompress | tensorflow/tensorflow/python/data/experimental/ops/compression_ops.py | 39 | function | Uncompress a compressed dataset element.
Args:
element: A scalar variant tensor to uncompress. The element should have been
created by calling `compress`.
output_spec: A nested structure of `tf.TypeSpec` representing the type(s) of
the uncompressed element.
Returns:
The uncompressed element. |
1455 | 1453 | CounterV2 | tensorflow/tensorflow/python/data/experimental/ops/counter.py | 29 | function | Creates a `Dataset` that counts from `start` in steps of size `step`.
For example:
```python
Dataset.count() == [0, 1, 2, ...)
Dataset.count(2) == [2, 3, ...)
Dataset.count(2, 5) == [2, 7, 12, ...)
Dataset.count(0, -1) == [0, -1, -2, ...)
Dataset.count(10, -1) == [10, 9, ...)
```
Args:
start: (Optional.) The starting value for the counter. Defaults to 0.
step: (Optional.) The step size for the counter. Defaults to 1.
dtype: (Optional.) The data type for counter elements. Defaults to
`tf.int64`.
Returns:
A `Dataset` of scalar `dtype` elements. |
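The semantics mirror Python's `itertools.count` (types aside, since the TF version yields scalar `dtype` tensors); the examples above translate directly:

```python
from itertools import count, islice

# Same start/step behavior as the Dataset.count() examples above;
# islice just takes a finite prefix of the infinite counter.
print(list(islice(count(2, 5), 3)))    # [2, 7, 12]
print(list(islice(count(10, -1), 3)))  # [10, 9, 8]
```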
1456 | 1454 | CounterV1 | tensorflow/tensorflow/python/data/experimental/ops/counter.py | 59 | function | |
1457 | 1455 | ProcessingMode | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 36 | class | |
1458 | 1456 | validate | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 40 | method | Raises a ValueError if the given object is not a valid processing mode. |
1459 | 1457 | distribute | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 295 | function | A transformation that moves dataset processing to the tf.data service.
When you iterate over a dataset containing the `distribute` transformation,
the tf.data service creates a "job" which produces data for the dataset
iteration.
The `processing_mode` argument controls what data is produced by a tf.data
service job. Currently, the only supported mode is "parallel_epochs".
processing_mode="parallel_epochs" means that multiple tf.data workers will
iterate through the dataset in parallel, each producing all elements of the
dataset. For example, if the dataset contains {0, 1, 2}, every tf.data worker
used for execution will produce {0, 1, 2}. If there are 3 workers, the job
will produce the elements {0, 0, 0, 1, 1, 1, 2, 2, 2} (though not necessarily
in that order). To account for this, it is recommended to randomly shuffle
your dataset, so that different tf.data workers will iterate through the
dataset in different orders.
In the future, there will be additional processing modes. For example,
a "one_epoch" mode which partitions the dataset across the tf.data
workers, so that the consumers see each element of the dataset only once.
```
dataset = tf.data.Dataset.range(5)
dataset = dataset.map(lambda x: x*x)
dataset = dataset.apply(
tf.data.experimental.service.distribute("parallel_epochs",
"grpc://dataservice:5000"))
dataset = dataset.map(lambda x: x+1)
for element in dataset:
print(element) # prints { 1, 2, 5, 10, 17 }
```
In the above example, the first two lines (before the call to `distribute`)
will be executed on tf.data workers, and the elements provided over
RPC. The remaining transformations (after the call to `distribute`) will be
executed locally.
The `job_name` argument allows jobs to be shared across multiple
datasets. Instead of each dataset creating its own job, all
datasets with the same `job_name` will consume from the same job. A new job
will be created for each iteration of the dataset (with each repetition of
`Dataset.repeat` counting as a new iteration). Suppose two training workers
(in either a single client or multi-client setup) iterate over the below
dataset, and there is a single tf.data worker:
```
range5_dataset = tf.data.Dataset.range(5)
dataset = range5_dataset.apply(tf.data.experimental.service.distribute(
"parallel_epochs", "grpc://dataservice:5000", job_name="my_job_name"))
for iteration in range(3):
print(list(dataset))
```
The elements of each job will be split between the two processes, with
elements being consumed by the processes on a first-come first-served basis.
One possible result is that process 1 prints
```
[0, 2, 4]
[0, 1, 3]
[1]
```
and process 2 prints
```
[1, 3]
[2, 4]
[0, 2, 3, 4]
```
Job names must not be re-used across different training jobs within the
lifetime of the tf.data service. In general, the tf.data service is expected
to live for the duration of a single training job.
To use the tf.data service with multiple training jobs, make sure to use
different job names to avoid conflicts. For example, suppose a training job
calls `distribute` with `job_name="job"` and reads until end of input. If
another independent job connects to the same tf.data service and tries to read
from `job_name="job"`, it will immediately receive end of input, without
getting any data.
**Keras and Distribution Strategies**
The dataset produced by the `distribute` transformation can be passed to
Keras' `Model.fit` or Distribution Strategy's
`tf.distribute.Strategy.experimental_distribute_dataset` like any other
`tf.data.Dataset`. We recommend setting a `job_name` on the call to
`distribute` so that if there are multiple workers, they read data from the
same job. Note that the autosharding normally performed by
`experimental_distribute_dataset` will be disabled when setting a `job_name`,
since sharing the job already results in splitting data across the workers.
When using a shared job, data will be dynamically balanced across workers, so
that they reach end of input about the same time. This results in better
worker utilization than with autosharding, where each worker processes an
independent set of files, and some workers may run out of data earlier than
others.
Args:
processing_mode: A string specifying the policy for how data should be
processed by tf.data workers. Currently, the only supported value is
"parallel_epochs".
service: A string indicating how to connect to the tf.data service. The
string should be in the format "protocol://address", e.g.
"grpc://localhost:5000".
job_name: (Optional.) The name of the job. This argument makes it possible
for multiple datasets to share the same job. The default behavior is that
the dataset creates anonymous, exclusively owned jobs.
max_outstanding_requests: (Optional.) A limit on how many elements may be
requested at the same time. You can use this option to control the amount
of memory used, since `distribute` won't use more than `element_size` *
`max_outstanding_requests` of memory.
Returns:
Dataset: A `Dataset` of the elements produced by the data service. |
1460 | 1458 | register_dataset | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 424 | function | Registers a dataset with the tf.data service.
`register_dataset` registers a dataset with the tf.data service so that
datasets can be created later with
`tf.data.experimental.service.from_dataset_id`. This is useful when the
dataset is registered by one process, then used in another process. When the same
process is both registering and reading from the dataset, it is simpler to use
`tf.data.experimental.service.distribute` instead.
If the dataset is already registered with the tf.data service,
`register_dataset` returns the already-registered dataset's id.
>>> dispatcher = tf.data.experimental.service.DispatchServer(port=0)
>>> dispatcher_address = dispatcher.target.split("://")[1]
>>> worker = tf.data.experimental.service.WorkerServer(
... port=0, dispatcher_address=dispatcher_address)
>>> dataset = tf.data.Dataset.range(10)
>>> dataset_id = tf.data.experimental.service.register_dataset(
... dispatcher.target, dataset)
>>> dataset = tf.data.experimental.service.from_dataset_id(
... processing_mode="parallel_epochs",
... service=dispatcher.target,
... dataset_id=dataset_id,
... element_spec=dataset.element_spec)
>>> print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Args:
service: A string indicating how to connect to the tf.data service. The
string should be in the format "protocol://address", e.g.
"grpc://localhost:5000".
dataset: A `tf.data.Dataset` to register with the tf.data service.
Returns:
A scalar int64 tensor of the registered dataset's id. |
1461 | 1459 | from_dataset_id | tensorflow/tensorflow/python/data/experimental/ops/data_service_ops.py | 491 | function | Creates a dataset which reads data from the tf.data service.
This is useful when the dataset is registered by one process, then used in
another process. When the same process is both registering and reading from
the dataset, it is simpler to use `tf.data.experimental.service.distribute`
instead.
Before using `from_dataset_id`, the dataset must have been registered with the
tf.data service using `tf.data.experimental.service.register_dataset`.
`register_dataset` returns a dataset id for the registered dataset. That is
the `dataset_id` which should be passed to `from_dataset_id`.
The `element_spec` argument indicates the `tf.TypeSpec`s for the elements
produced by the dataset. Currently `element_spec` must be explicitly
specified, and match the dataset registered under `dataset_id`. `element_spec`
defaults to `None` so that in the future we can support automatically
discovering the `element_spec` by querying the tf.data service.
`tf.data.experimental.service.distribute` is a convenience method which
combines `register_dataset` and `from_dataset_id` into a dataset
transformation.
See the documentation for `tf.data.experimental.service.distribute` for more
detail about how `from_dataset_id` works.
>>> dispatcher = tf.data.experimental.service.DispatchServer(port=0)
>>> dispatcher_address = dispatcher.target.split("://")[1]
>>> worker = tf.data.experimental.service.WorkerServer(
... port=0, dispatcher_address=dispatcher_address)
>>> dataset = tf.data.Dataset.range(10)
>>> dataset_id = tf.data.experimental.service.register_dataset(
... dispatcher.target, dataset)
>>> dataset = tf.data.experimental.service.from_dataset_id(
... processing_mode="parallel_epochs",
... service=dispatcher.target,
... dataset_id=dataset_id,
... element_spec=dataset.element_spec)
>>> print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Args:
processing_mode: A string specifying the policy for how data should be
processed by tf.data workers. Currently, the only supported value is
"parallel_epochs".
service: A string indicating how to connect to the tf.data service. The
string should be in the format "protocol://address", e.g.
"grpc://localhost:5000".
dataset_id: The id of the dataset to read from. This id is returned by
`register_dataset` when the dataset is registered with the tf.data
service.
element_spec: A nested structure of `tf.TypeSpec`s representing the type of
elements produced by the dataset. Use `tf.data.Dataset.element_spec` to
see the element spec for a given dataset.
job_name: (Optional.) The name of the job. This argument makes it possible
for multiple datasets to share the same job. The default behavior is that
the dataset creates anonymous, exclusively owned jobs.
max_outstanding_requests: (Optional.) A limit on how many elements may be
requested at the same time. You can use this option to control the amount
of memory used, since `distribute` won't use more than `element_size` *
`max_outstanding_requests` of memory.
Returns:
A `tf.data.Dataset` which reads from the tf.data service. |
1462 | 1460 | replicate | tensorflow/tensorflow/python/data/experimental/ops/distribute.py | 294 | function | A transformation that replicates `dataset` onto a list of devices.
Args:
dataset: A `tf.data.Dataset` object.
devices: A list of devices to replicate the dataset on.
Returns:
A dictionary mapping device name to a dataset on that device. |
1463 | 1461 | batch_sizes_for_worker | tensorflow/tensorflow/python/data/experimental/ops/distribute.py | 328 | function | Determines how to rebatch a dataset for the given worker.
Given the global batch size, number of workers, number of replicas per worker,
and worker index, returns the correct batch sizes for rebatching a dataset
on worker `worker_index` of `num_workers`, such that each global step (across
all workers and replicas) will consume global_batch_size elements. The
returned value should be passed as the `batch_sizes` input parameter to
`tf.data.experimental.rebatch()`. The returned batch sizes meet the following
constraints:
Let G = global_batch_size, W = num_workers, R = num_replicas_per_worker
(A) for any worker, len(batch_sizes) = W * R
(B) for any worker, sum(batch_sizes) == G
(C) for any global step (i.e. R iterations on each worker), the sum of batches
consumed by replicas across all workers is G.
(D) any two batch sizes of any two replicas differs by at most one.
For example, suppose we have G = 7, W = 2, R = 2, and suppose we have two
files which each contain 7 elements:
```python
# WORKER 0
global_batch_size = 7
batch_sizes_0 = batch_sizes_for_worker(global_batch_size=global_batch_size,
num_workers=2,
num_replicas_per_worker=2,
worker_index=0)
print(batch_sizes_0)
>> [2, 2, 2, 1]
dataset_0 = tf.data.Dataset.from_tensor_slices(["file_a", "file_b"])
dataset_0 = dataset_0.shard(num_shards=2, index=0)
dataset_0 = dataset_0.batch(7)
dataset_0 = dataset_0.apply(tf.data.experimental.rebatch(batch_sizes_0))
for elem in dataset_0:
print(elem)
>> [[A0, A1], [A2, A3], [A4, A5], [A6]]
# WORKER 1
batch_sizes_1 = batch_sizes_for_worker(global_batch_size=global_batch_size,
num_workers=2,
num_replicas_per_worker=2,
worker_index=1)
print(batch_sizes_1)
>> [2, 1, 2, 2]
dataset_1 = tf.data.Dataset.from_tensor_slices(["file_a", "file_b"])
dataset_1 = dataset_1.shard(num_shards=2, index=1)
dataset_1 = dataset_1.batch(7)
dataset_1 = dataset_1.apply(tf.data.experimental.rebatch(batch_sizes_1))
for elem in dataset_1:
print(elem)
>> [[B0, B1], [B2], [B3, B4], [B5, B6]]
```
The above example will produce the following elements:
Step 1:
Worker 0 Replica 0: [A0, A1]
Worker 0 Replica 1: [A2, A3]
Worker 1 Replica 0: [B0, B1]
Worker 1 Replica 1: [B2]
Total batch size = 7
Step 2:
Worker 0 Replica 0: [A4, A5]
Worker 0 Replica 1: [A6]
Worker 1 Replica 0: [B3, B4]
Worker 1 Replica 1: [B5, B6]
Total batch size = 7
Args:
global_batch_size: A `tf.int64` scalar, representing the global batch size.
num_workers: An integer representing the number of workers the dataset will
be distributed across.
num_replicas_per_worker: An integer representing the number of replicas per
worker. All workers are assumed to have the same number of replicas.
worker_index: An integer index of the worker to be rebatched.
Returns:
A `tf.int64` vector, representing the batch sizes to rebatch the dataset
into. |
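Constraints (A)-(D) above largely pin down the answer: spread `G` over the `W * R` replica slots as evenly as possible, then rotate the pattern by `worker_index * R` so consecutive slots line up across workers. A pure-Python sketch of that scheme (it reproduces the documented example, though it is not necessarily the exact TF algorithm):

```python
def batch_sizes_for_worker(global_batch_size, num_workers,
                           num_replicas_per_worker, worker_index):
    """Sketch of per-worker batch sizes satisfying constraints (A)-(D)."""
    n = num_workers * num_replicas_per_worker
    floor, rem = divmod(global_batch_size, n)
    # Even spread: `rem` slots get floor+1, the rest get floor (constraint D).
    base = [floor + 1] * rem + [floor] * (n - rem)
    # Rotate so each worker starts at its own block of replica slots.
    offset = worker_index * num_replicas_per_worker
    return base[offset:] + base[:offset]

print(batch_sizes_for_worker(7, 2, 2, 0))  # [2, 2, 2, 1]
print(batch_sizes_for_worker(7, 2, 2, 1))  # [2, 1, 2, 2]
```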
1464 | 1462 | compute_batch_size | tensorflow/tensorflow/python/data/experimental/ops/distribute.py | 436 | function | An operation that returns the batch size of the dataset.
This op tries to infer the batch size statically by walking up the dataset
tree from the final dataset node and returning the batch size of the first
batching dataset (such as from .batch() and .padded_batch()) that it
encounters. This differs from using the `element_spec` of a dataset in that it
does not account for partial batches.
This operation may fail if it encounters contradictory batch sizes (for
example, if the dataset is created by zipping together two datasets with
different batch sizes), if there are no explicit batching transformations, or
if there are operations downstream from the batching transformation that may
modify its batch size. In these cases, it returns a -1.
Args:
dataset: A `tf.data.Dataset` object.
Returns:
A `tf.int64` Tensor representing the batch size of the dataset sans partial
batches. If this cannot be inferred statically, the value of this tensor
will be -1. |
1465 | 1463 | AutoShardPolicy | tensorflow/tensorflow/python/data/experimental/ops/distribute_options.py | 27 | class | Represents the type of auto-sharding we enable.
Please see the DistributeOptions.auto_shard_policy documentation for more
information on each type of autosharding. |
1466 | 1464 | ExternalStatePolicy | tensorflow/tensorflow/python/data/experimental/ops/distribute_options.py | 39 | class | |
1467 | 1465 | DistributeOptions | tensorflow/tensorflow/python/data/experimental/ops/distribute_options.py | 46 | class | Represents options for distributed data processing.
You can set the distribution options of a dataset through the
`experimental_distribute` property of `tf.data.Options`; the property is
an instance of `tf.data.experimental.DistributeOptions`.
```python
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = AutoShardPolicy.OFF
dataset = dataset.with_options(options)
``` |
1468 | 1466 | enumerate_dataset | tensorflow/tensorflow/python/data/experimental/ops/enumerate_ops.py | 26 | function | A transformation that enumerates the elements of a dataset.
It is similar to python's `enumerate`.
For example:
```python
# NOTE: The following examples use `{ ... }` to represent the
# contents of a dataset.
a = { 1, 2, 3 }
b = { (7, 8), (9, 10) }
# The nested structure of the `datasets` argument determines the
# structure of elements in the resulting dataset.
a.apply(tf.data.experimental.enumerate_dataset(start=5))
=> { (5, 1), (6, 2), (7, 3) }
b.apply(tf.data.experimental.enumerate_dataset())
=> { (0, (7, 8)), (1, (9, 10)) }
```
Args:
start: A `tf.int64` scalar `tf.Tensor`, representing the start value for
enumeration.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
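The behavior is exactly Python's built-in `enumerate` with a start value; the example above, restated on plain lists:

```python
a = [1, 2, 3]
b = [(7, 8), (9, 10)]
print(list(enumerate(a, start=5)))  # [(5, 1), (6, 2), (7, 3)]
print(list(enumerate(b)))           # [(0, (7, 8)), (1, (9, 10))]
```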
1469 | 1467 | ignore_errors | tensorflow/tensorflow/python/data/experimental/ops/error_ops.py | 26 | function | Creates a `Dataset` from another `Dataset` and silently ignores any errors.
Use this transformation to produce a dataset that contains the same elements
as the input, but silently drops any elements that caused an error. For
example:
```python
dataset = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.])
# Computing `tf.debugging.check_numerics(1. / 0.)` will raise an
# InvalidArgumentError.
dataset = dataset.map(lambda x: tf.debugging.check_numerics(1. / x, "error"))
# Using `ignore_errors()` will drop the element that causes an error.
dataset = dataset.apply(
    tf.data.experimental.ignore_errors())  # ==> {1., 0.5, 0.25}
```
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
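The drop-on-error behavior can be sketched generically in pure Python (the TF op reacts to TensorFlow runtime errors; here we assume a Python exception from the mapped function should skip the element):

```python
def map_ignoring_errors(func, elements):
    """Apply func to each element, silently dropping elements that raise."""
    for x in elements:
        try:
            yield func(x)
        except ArithmeticError:  # e.g. the division by zero below
            continue

data = [1., 2., 0., 4.]
print(list(map_ignoring_errors(lambda x: 1. / x, data)))
# [1.0, 0.5, 0.25]
```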
1470 | 1468 | get_single_element | tensorflow/tensorflow/python/data/experimental/ops/get_single_element.py | 27 | function | Returns the single element in `dataset` as a nested structure of tensors.
This function enables you to use a `tf.data.Dataset` in a stateless
"tensor-in tensor-out" expression, without creating an iterator.
This can be useful when your preprocessing transformations are expressed
as a `Dataset`, and you want to use the transformation at serving time.
For example:
```python
def preprocessing_fn(input_str):
# ...
return image, label
input_batch = ... # input batch of BATCH_SIZE elements
dataset = (tf.data.Dataset.from_tensor_slices(input_batch)
.map(preprocessing_fn, num_parallel_calls=BATCH_SIZE)
.batch(BATCH_SIZE))
image_batch, label_batch = tf.data.experimental.get_single_element(dataset)
```
Args:
dataset: A `tf.data.Dataset` object containing a single element.
Returns:
A nested structure of `tf.Tensor` objects, corresponding to the single
element of `dataset`.
Raises:
TypeError: if `dataset` is not a `tf.data.Dataset` object.
InvalidArgumentError (at runtime): if `dataset` does not contain exactly
one element. |
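The "exactly one element" contract can be sketched over a plain iterator (a hypothetical helper, using `ValueError` where TF raises `InvalidArgumentError` at runtime):

```python
def get_single_element(iterable):
    """Return the sole element, raising if there are zero or several."""
    it = iter(iterable)
    try:
        first = next(it)
    except StopIteration:
        raise ValueError("dataset is empty") from None
    try:
        next(it)
    except StopIteration:
        return first  # exactly one element: success
    raise ValueError("dataset has more than one element")

print(get_single_element([42]))  # 42
```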
1471 | 1469 | group_by_reducer | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 38 | function | A transformation that groups elements and performs a reduction.
This transformation maps element of a dataset to a key using `key_func` and
groups the elements by key. The `reducer` is used to process each group; its
`init_func` is used to initialize state for each group when it is created, the
`reduce_func` is used to update the state every time an element is mapped to
the matching group, and the `finalize_func` is used to map the final state to
an output value.
Args:
key_func: A function mapping a nested structure of tensors
(having shapes and types defined by `self.output_shapes` and
`self.output_types`) to a scalar `tf.int64` tensor.
reducer: An instance of `Reducer`, which captures the reduction logic using
the `init_func`, `reduce_func`, and `finalize_func` functions.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
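The init/reduce/finalize contract can be illustrated in pure Python; here, a hypothetical reducer that sums the elements sharing a key:

```python
def group_by_reducer(elements, key_func, init_func, reduce_func, finalize_func):
    """Group elements by key_func, folding each group through the reducer."""
    states = {}
    for x in elements:
        k = key_func(x)
        if k not in states:
            states[k] = init_func(k)  # state created when the group appears
        states[k] = reduce_func(states[k], x)
    return {k: finalize_func(s) for k, s in states.items()}

# Sum elements grouped by parity.
result = group_by_reducer(
    range(6),
    key_func=lambda x: x % 2,
    init_func=lambda key: 0,
    reduce_func=lambda state, x: state + x,
    finalize_func=lambda state: state)
print(result)  # {0: 6, 1: 9}
```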
1472 | 1470 | group_by_window | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 68 | function | A transformation that groups windows of elements by key and reduces them.
This transformation maps each consecutive element in a dataset to a key
using `key_func` and groups the elements by key. It then applies
`reduce_func` to at most `window_size_func(key)` elements matching the same
key. All except the final window for each key will contain
`window_size_func(key)` elements; the final window may be smaller.
You may provide either a constant `window_size` or a window size determined by
the key through `window_size_func`.
Args:
key_func: A function mapping a nested structure of tensors
(having shapes and types defined by `self.output_shapes` and
`self.output_types`) to a scalar `tf.int64` tensor.
reduce_func: A function mapping a key and a dataset of up to `window_size`
consecutive elements matching that key to another dataset.
window_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
consecutive elements matching the same key to combine in a single
batch, which will be passed to `reduce_func`. Mutually exclusive with
`window_size_func`.
window_size_func: A function mapping a key to a `tf.int64` scalar
`tf.Tensor`, representing the number of consecutive elements matching
the same key to combine in a single batch, which will be passed to
`reduce_func`. Mutually exclusive with `window_size`.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`.
Raises:
ValueError: if neither or both of {`window_size`, `window_size_func`} are
passed. |
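A sequential pure-Python sketch of the windowing contract (the real op emits windows as they fill and runs `reduce_func` on datasets, not lists):

```python
def group_by_window(elements, key_func, reduce_func, window_size):
    """Collect elements into per-key windows; reduce each window when full,
    then reduce any smaller leftover windows at the end."""
    windows, out = {}, []
    for x in elements:
        k = key_func(x)
        windows.setdefault(k, []).append(x)
        if len(windows[k]) == window_size:
            out.append(reduce_func(k, windows.pop(k)))
    for k, w in windows.items():  # final, possibly smaller windows
        out.append(reduce_func(k, w))
    return out

# Batch elements by parity, two at a time.
print(group_by_window(range(5), lambda x: x % 2, lambda k, w: w, 2))
# [[0, 2], [1, 3], [4]]
```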
1473 | 1471 | bucket_by_sequence_length | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 128 | function | A transformation that buckets elements in a `Dataset` by length.
Elements of the `Dataset` are grouped together by length and then are padded
and batched.
This is useful for sequence tasks in which the elements have variable length.
Grouping together elements that have similar lengths reduces the total
fraction of padding in a batch which increases training step efficiency.
Args:
element_length_func: function from element in `Dataset` to `tf.int32`,
determines the length of the element, which will determine the bucket it
goes into.
bucket_boundaries: `list<int>`, upper length boundaries of the buckets.
bucket_batch_sizes: `list<int>`, batch size per bucket. Length should be
`len(bucket_boundaries) + 1`.
padded_shapes: Nested structure of `tf.TensorShape` to pass to
`tf.data.Dataset.padded_batch`. If not provided, will use
`dataset.output_shapes`, which will result in variable length dimensions
being padded out to the maximum length in each batch.
padding_values: Values to pad with, passed to
`tf.data.Dataset.padded_batch`. Defaults to padding with 0.
pad_to_bucket_boundary: bool, if `False`, will pad dimensions with unknown
size to maximum length in batch. If `True`, will pad dimensions with
unknown size to bucket boundary minus 1 (i.e., the maximum length in each
bucket), and caller must ensure that the source `Dataset` does not contain
any elements with length longer than `max(bucket_boundaries)`.
no_padding: `bool`, indicates whether to pad the batch features (features
need to be either of type `tf.sparse.SparseTensor` or of same shape).
drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing
whether the last batch should be dropped in the case it has fewer than
`batch_size` elements; the default behavior is not to drop the smaller
batch.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`.
Raises:
ValueError: if `len(bucket_batch_sizes) != len(bucket_boundaries) + 1`. |
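The bucketing-and-padding behavior described above can be sketched in plain Python. This is an illustrative stand-in, not the TF implementation; all names here are hypothetical:

```python
import bisect

def bucket_id(length, boundaries):
    # Index of the first boundary greater than `length`; lengths at or
    # above the last boundary fall into the final overflow bucket.
    return bisect.bisect_right(boundaries, length)

def bucket_and_pad(elements, boundaries, batch_sizes, pad_value=0):
    """Group variable-length sequences into length buckets and emit a
    padded batch whenever a bucket reaches its batch size."""
    buckets, batches = {}, []
    for elem in elements:
        b = bucket_id(len(elem), boundaries)
        buckets.setdefault(b, []).append(elem)
        if len(buckets[b]) == batch_sizes[b]:
            batch = buckets.pop(b)
            max_len = max(len(e) for e in batch)
            batches.append(
                [e + [pad_value] * (max_len - len(e)) for e in batch])
    return batches

batches = bucket_and_pad([[1], [1, 2], [1, 2, 3], [9]],
                         boundaries=[2, 4], batch_sizes=[2, 2, 2])
```

Sequences of length 2 and 3 share the middle bucket and are padded to the longest member of their batch; the two length-1 sequences batch together with no padding.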
1474 | 1472 | Reducer | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 443 | class | A reducer is used for reducing a set of elements.
A reducer is represented as a tuple of the three functions:
1) initialization function: key => initial state
2) reduce function: (old state, input) => new state
3) finalization function: state => result |
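The three-function contract above can be illustrated with a small pure-Python stand-in (hypothetical helper names, not the TF class):

```python
class Reducer:
    """Minimal stand-in mirroring the (init, reduce, finalize) triple."""
    def __init__(self, init_func, reduce_func, finalize_func):
        self.init_func = init_func
        self.reduce_func = reduce_func
        self.finalize_func = finalize_func

def group_reduce(pairs, reducer):
    # pairs: iterable of (key, value); one state is kept per key.
    states = {}
    for key, value in pairs:
        state = states.get(key, reducer.init_func(key))
        states[key] = reducer.reduce_func(state, value)
    return {k: reducer.finalize_func(s) for k, s in states.items()}

# Mean per key: the state is a (sum, count) pair.
mean = Reducer(lambda key: (0, 0),
               lambda s, x: (s[0] + x, s[1] + 1),
               lambda s: s[0] / s[1])
result = group_reduce([(0, 2), (1, 5), (0, 4)], mean)
```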
1475 | 1473 | init_func | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 458 | method | |
1476 | 1474 | reduce_func | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 462 | method | |
1477 | 1475 | finalize_func | tensorflow/tensorflow/python/data/experimental/ops/grouping.py | 466 | method | |
1478 | 1476 | parallel_interleave | tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py | 43 | function | A parallel version of the `Dataset.interleave()` transformation.
`parallel_interleave()` maps `map_func` across its input to produce nested
datasets, and outputs their elements interleaved. Unlike
`tf.data.Dataset.interleave`, it gets elements from `cycle_length` nested
datasets in parallel, which increases the throughput, especially in the
presence of stragglers. Furthermore, the `sloppy` argument can be used to
improve performance, by relaxing the requirement that the outputs are produced
in a deterministic order, and allowing the implementation to skip over nested
datasets whose elements are not readily available when requested.
Example usage:
```python
# Preprocess 4 files concurrently.
filenames = tf.data.Dataset.list_files("/path/to/data/train*.tfrecords")
dataset = filenames.apply(
tf.data.experimental.parallel_interleave(
lambda filename: tf.data.TFRecordDataset(filename),
cycle_length=4))
```
WARNING: If `sloppy` is `True`, the order of produced elements is not
deterministic.
Args:
map_func: A function mapping a nested structure of tensors to a `Dataset`.
cycle_length: The number of input `Dataset`s to interleave from in parallel.
block_length: The number of consecutive elements to pull from an input
`Dataset` before advancing to the next input `Dataset`.
sloppy: A boolean controlling whether determinism should be traded for
performance by allowing elements to be produced out of order. If
`sloppy` is `None`, the `tf.data.Options.experimental_deterministic`
dataset option (`True` by default) is used to decide whether to enforce a
deterministic order.
buffer_output_elements: The number of elements each iterator being
interleaved should buffer (similar to the `.prefetch()` transformation for
each interleaved iterator).
prefetch_input_elements: The number of input elements to transform to
iterators before they are needed for interleaving.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
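The cycle/block interleaving pattern can be sketched sequentially in plain Python; this sketch omits the parallelism and the `sloppy` relaxation that are the point of the real op, and all names are illustrative:

```python
import itertools
from collections import deque

def interleave(make_iter, inputs, cycle_length, block_length):
    """Pull up to `block_length` elements from each of `cycle_length`
    open iterators in round-robin order, opening new inputs as old
    ones are exhausted."""
    pending = deque(inputs)
    active = deque()
    while len(active) < cycle_length and pending:
        active.append(iter(make_iter(pending.popleft())))
    while active:
        it = active.popleft()
        block = list(itertools.islice(it, block_length))
        yield from block
        if len(block) == block_length:
            active.append(it)  # may have more elements: rotate to back
        elif pending:
            active.append(iter(make_iter(pending.popleft())))

mixed = list(interleave(lambda name: [name] * 3, ["a", "b"],
                        cycle_length=2, block_length=1))
```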
1479 | 1477 | sample_from_datasets_v2 | tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py | 146 | function | Samples elements at random from the datasets in `datasets`.
Args:
datasets: A list of `tf.data.Dataset` objects with compatible structure.
weights: (Optional.) A list of `len(datasets)` floating-point values where
`weights[i]` represents the probability with which an element should be
sampled from `datasets[i]`, or a `tf.data.Dataset` object where each
element is such a list. Defaults to a uniform distribution across
`datasets`.
seed: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the
random seed that will be used to create the distribution. See
`tf.random.set_seed` for behavior.
Returns:
A dataset that interleaves elements from `datasets` at random, according to
`weights` if provided, otherwise with uniform probability.
Raises:
TypeError: If the `datasets` or `weights` arguments have the wrong type.
ValueError: If the `weights` argument is specified and does not match the
length of the `datasets` element. |
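The weighted-sampling semantics can be sketched with Python iterators (a simplification: the real op keeps sampling from the remaining datasets, while this sketch stops at the first exhausted iterator; names are hypothetical):

```python
import random

def sample_from_iterators(iterators, weights=None, seed=None):
    """Yield elements drawn at random from `iterators` with probability
    proportional to `weights` (uniform if omitted)."""
    rng = random.Random(seed)
    weights = weights or [1.0] * len(iterators)
    while True:
        (it,) = rng.choices(iterators, weights=weights)
        try:
            yield next(it)
        except StopIteration:
            return

# Weight 0 on the second iterator makes the draw deterministic here.
drawn = list(sample_from_iterators([iter([1, 2, 3]), iter([9, 9])],
                                   weights=[1.0, 0.0]))
```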
1480 | 1478 | sample_from_datasets_v1 | tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py | 230 | function | |
1481 | 1479 | choose_from_datasets_v2 | tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py | 237 | function | Creates a dataset that deterministically chooses elements from `datasets`.
For example, given the following datasets:
```python
datasets = [tf.data.Dataset.from_tensors("foo").repeat(),
tf.data.Dataset.from_tensors("bar").repeat(),
tf.data.Dataset.from_tensors("baz").repeat()]
# Define a dataset containing `[0, 1, 2, 0, 1, 2, 0, 1, 2]`.
choice_dataset = tf.data.Dataset.range(3).repeat(3)
result = tf.data.experimental.choose_from_datasets(datasets, choice_dataset)
```
The elements of `result` will be:
```
"foo", "bar", "baz", "foo", "bar", "baz", "foo", "bar", "baz"
```
Args:
datasets: A list of `tf.data.Dataset` objects with compatible structure.
choice_dataset: A `tf.data.Dataset` of scalar `tf.int64` tensors between
`0` and `len(datasets) - 1`.
Returns:
A dataset that interleaves elements from `datasets` according to the values
of `choice_dataset`.
Raises:
TypeError: If the `datasets` or `choice_dataset` arguments have the wrong
type. |
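The deterministic selection contract reduces to a very small loop in plain Python (illustrative sketch, not the TF op):

```python
def choose_from_iterators(iterators, choices):
    """For each index i in `choices`, emit the next element of
    iterators[i]."""
    for i in choices:
        yield next(iterators[i])

# Mirrors the "foo"/"bar"/"baz" example above.
its = [iter(["foo"] * 3), iter(["bar"] * 3), iter(["baz"] * 3)]
chosen = list(choose_from_iterators(its, [0, 1, 2] * 3))
```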
1482 | 1480 | choose_from_datasets_v1 | tensorflow/tensorflow/python/data/experimental/ops/interleave_ops.py | 280 | function | |
1483 | 1481 | save | tensorflow/tensorflow/python/data/experimental/ops/io.py | 34 | function | Saves the content of the given dataset.
Example usage:
>>> import tempfile
>>> path = os.path.join(tempfile.gettempdir(), "saved_data")
>>> # Save a dataset
>>> dataset = tf.data.Dataset.range(2)
>>> tf.data.experimental.save(dataset, path)
>>> new_dataset = tf.data.experimental.load(path,
... tf.TensorSpec(shape=(), dtype=tf.int64))
>>> for elem in new_dataset:
... print(elem)
tf.Tensor(0, shape=(), dtype=int64)
tf.Tensor(1, shape=(), dtype=int64)
The saved dataset is saved in multiple file "shards". By default, the dataset
output is divided into shards in a round-robin fashion, but custom sharding can
be specified via the `shard_func` function. For example, you can save the
dataset using a single shard as follows:
```python
dataset = make_dataset()
def custom_shard_func(element):
return 0
dataset = tf.data.experimental.save(
path="/path/to/data", ..., shard_func=custom_shard_func)
```
NOTE: The directory layout and file format used for saving the dataset is
considered an implementation detail and may change. For this reason, datasets
saved through `tf.data.experimental.save` should only be consumed through
`tf.data.experimental.load`, which is guaranteed to be backwards compatible.
Args:
dataset: The dataset to save.
path: Required. A directory to use for saving the dataset.
compression: Optional. The algorithm to use to compress data when writing
it. Supported options are `GZIP` and `NONE`. Defaults to `NONE`.
shard_func: Optional. A function to control the mapping of dataset elements
to file shards. The function is expected to map elements of the input
dataset to int64 shard IDs. If present, the function will be traced and
executed as graph computation. |
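The shard-assignment behavior (round-robin by default, overridable via `shard_func`) can be sketched in plain Python; the function and its return layout are illustrative, not the on-disk format:

```python
def assign_shards(elements, num_shards, shard_func=None):
    """Assign each element a shard id: round-robin by default, or via a
    caller-supplied shard_func."""
    shards = {i: [] for i in range(num_shards)}
    for idx, elem in enumerate(elements):
        shard_id = shard_func(elem) if shard_func else idx % num_shards
        shards[shard_id].append(elem)
    return shards

round_robin = assign_shards(range(6), num_shards=3)
# A constant shard_func collapses the output to a single shard,
# as in the custom_shard_func example above.
single = assign_shards(range(6), num_shards=3, shard_func=lambda _: 0)
```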
1484 | 1482 | load | tensorflow/tensorflow/python/data/experimental/ops/io.py | 146 | function | Loads a previously saved dataset.
Example usage:
>>> import tempfile
>>> path = os.path.join(tempfile.gettempdir(), "saved_data")
>>> # Save a dataset
>>> dataset = tf.data.Dataset.range(2)
>>> tf.data.experimental.save(dataset, path)
>>> new_dataset = tf.data.experimental.load(path,
... tf.TensorSpec(shape=(), dtype=tf.int64))
>>> for elem in new_dataset:
... print(elem)
tf.Tensor(0, shape=(), dtype=int64)
tf.Tensor(1, shape=(), dtype=int64)
Note that to load a previously saved dataset, you need to specify
`element_spec` -- a type signature of the elements of the saved dataset, which
can be obtained via `tf.data.Dataset.element_spec`. This requirement exists so
that shape inference of the loaded dataset does not need to perform I/O.
If the default option of sharding the saved dataset was used, the element
order of the saved dataset will be preserved when loading it.
The `reader_func` argument can be used to specify a custom order in which
elements should be loaded from the individual shards. The `reader_func` is
expected to take a single argument -- a dataset of datasets, each containing
elements of one of the shards -- and return a dataset of elements. For
example, the order of shards can be shuffled when loading them as follows:
```python
def custom_reader_func(datasets):
datasets = datasets.shuffle(NUM_SHARDS)
return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)
dataset = tf.data.experimental.load(
path="/path/to/data", ..., reader_func=custom_reader_func)
```
Args:
path: Required. A path pointing to a previously saved dataset.
element_spec: Required. A nested structure of `tf.TypeSpec` objects matching
the structure of an element of the saved dataset and specifying the type
of individual element components.
compression: Optional. The algorithm to use to decompress the data when
reading it. Supported options are `GZIP` and `NONE`. Defaults to `NONE`.
reader_func: Optional. A function to control how to read data from shards.
If present, the function will be traced and executed as graph computation.
Returns:
A `tf.data.Dataset` instance. |
1485 | 1483 | make_saveable_from_iterator | tensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py | 49 | function | Returns a SaveableObject for saving/restoring iterator state using Saver.
Args:
iterator: Iterator.
external_state_policy: A string that identifies how to handle input
pipelines that depend on external state. Possible values are
'ignore': The external state is silently ignored.
'warn': The external state is ignored, logging a warning.
'fail': The operation fails upon encountering external state.
By default we set it to 'fail'.
Returns:
A SaveableObject for saving/restoring iterator state using Saver.
Raises:
ValueError: If iterator does not support checkpointing.
ValueError: If `external_state_policy` is not one of 'warn', 'ignore' or
'fail'.
For example:
```python
with tf.Graph().as_default():
ds = tf.data.Dataset.range(10)
iterator = ds.make_initializable_iterator()
# Build the iterator SaveableObject.
saveable_obj = tf.data.experimental.make_saveable_from_iterator(iterator)
# Add the SaveableObject to the SAVEABLE_OBJECTS collection so
# it can be automatically saved using Saver.
tf.compat.v1.add_to_collection(tf.GraphKeys.SAVEABLE_OBJECTS, saveable_obj)
saver = tf.compat.v1.train.Saver()
while continue_training:
... Perform training ...
if should_save_checkpoint:
saver.save()
```
Note: When restoring the iterator, the existing iterator state is completely
discarded. This means that any changes you may have made to the Dataset
graph will be discarded as well! This includes the new Dataset graph
that you may have built during validation. So, while running validation,
make sure to run the initializer for the validation input pipeline after
restoring the checkpoint.
Note: Not all iterators support checkpointing yet. Attempting to save the
state of an unsupported iterator will throw an error. |
1486 | 1484 | CheckpointInputPipelineHook | tensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py | 106 | class | Checkpoints input pipeline state every N steps or seconds.
This hook saves the state of the iterators in the `Graph` so that when
training is resumed the input pipeline continues from where it left off.
This could potentially avoid overfitting in certain pipelines where the
number of training steps per eval is small compared to the dataset
size or if the training pipeline is pre-empted.
Differences from `CheckpointSaverHook`:
1. Saves only the input pipelines in the "iterators" collection and not the
global variables or other saveable objects.
2. Does not write the `GraphDef` and `MetaGraphDef` to the summary.
Example of checkpointing the training pipeline:
```python
est = tf.estimator.Estimator(model_fn)
while True:
est.train(
train_input_fn,
hooks=[tf.data.experimental.CheckpointInputPipelineHook(est)],
steps=train_steps_per_eval)
# Note: We do not pass the hook here.
metrics = est.evaluate(eval_input_fn)
if should_stop_the_training(metrics):
break
```
This hook should be used if the input pipeline state needs to be saved
separate from the model checkpoint. Doing so may be useful for a few reasons:
1. The input pipeline checkpoint may be large, if there are large shuffle
or prefetch buffers for instance, and may bloat the checkpoint size.
2. If the input pipeline is shared between training and validation, restoring
the checkpoint during validation may override the validation input
pipeline.
For saving the input pipeline checkpoint alongside the model weights use
`tf.data.experimental.make_saveable_from_iterator` directly to create a
`SaveableObject` and add to the `SAVEABLE_OBJECTS` collection. Note, however,
that you will need to be careful not to restore the training iterator during
eval. You can do that by not adding the iterator to the SAVEABLE_OBJECTS
collector when building the eval graph. |
1487 | 1485 | begin | tensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py | 227 | method | |
1488 | 1486 | after_create_session | tensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py | 244 | method | |
1489 | 1487 | before_run | tensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py | 284 | method | |
1490 | 1488 | after_run | tensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py | 290 | method | |
1491 | 1489 | end | tensorflow/tensorflow/python/data/experimental/ops/iterator_ops.py | 293 | method | |
1492 | 1490 | map_defun | tensorflow/tensorflow/python/data/experimental/ops/map_defun.py | 26 | function | Map a function on the list of tensors unpacked from `elems` on dimension 0.
Args:
fn: A function (`function.defun`) that takes a list of tensors and returns
another list of tensors. The output list has the same types as
output_dtypes. The elements of the output list have the same dimension 0
as `elems`, and the remaining dimensions correspond to those of
`fn_output_shapes`.
elems: A list of tensors.
output_dtypes: A list of dtypes corresponding to the output types of the
function.
output_shapes: A list of `TensorShape`s corresponding to the output shapes
from each invocation of the function on slices of inputs.
max_intra_op_parallelism: An integer. If positive, sets the max parallelism
limit of each function call to this.
Raises:
ValueError: if any of the inputs are malformed.
Returns:
A list of `Tensor` objects with the same types as `output_dtypes`. |
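The map-over-dimension-0 contract can be illustrated on Python lists (a hypothetical sketch of the shape semantics, without the parallelism or dtype machinery):

```python
def map_defun_sketch(fn, elems):
    """`elems` is a list of equal-length sequences. `fn` maps one slice
    (one item from each sequence) to a list of outputs; the outputs are
    restacked so every output shares dimension 0 with `elems`."""
    per_slice = [fn(*slices) for slices in zip(*elems)]
    return [list(col) for col in zip(*per_slice)]

# fn returns [sum, product] for each paired (a, b) slice.
sums, products = map_defun_sketch(lambda a, b: [a + b, a * b],
                                  [[1, 2, 3], [4, 5, 6]])
```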
1493 | 1491 | MatchingFilesDataset | tensorflow/tensorflow/python/data/experimental/ops/matching_files.py | 28 | class | A `Dataset` that lists the files according to the input patterns. |
1494 | 1492 | element_spec | tensorflow/tensorflow/python/data/experimental/ops/matching_files.py | 38 | method | |
1495 | 1493 | model | tensorflow/tensorflow/python/data/experimental/ops/optimization.py | 24 | function | A transformation that models performance.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1496 | 1494 | optimize | tensorflow/tensorflow/python/data/experimental/ops/optimization.py | 39 | function | A transformation that applies optimizations.
Args:
optimizations: (Optional.) A `tf.string` vector `tf.Tensor` identifying
optimizations to use. If not specified, the default set of optimizations
is applied.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1497 | 1495 | MapVectorizationOptions | tensorflow/tensorflow/python/data/experimental/ops/optimization_options.py | 36 | class | Represents options for the MapVectorization optimization. |
1498 | 1496 | OptimizationOptions | tensorflow/tensorflow/python/data/experimental/ops/optimization_options.py | 70 | class | Represents options for dataset optimizations.
You can set the optimization options of a dataset through the
`experimental_optimization` property of `tf.data.Options`; the property is
an instance of `tf.data.experimental.OptimizationOptions`.
```python
options = tf.data.Options()
options.experimental_optimization.noop_elimination = True
options.experimental_optimization.map_vectorization.enabled = True
options.experimental_optimization.apply_default_optimizations = False
dataset = dataset.with_options(options)
``` |
1499 | 1497 | parse_example_dataset | tensorflow/tensorflow/python/data/experimental/ops/parsing_ops.py | 110 | function | A transformation that parses `Example` protos into a `dict` of tensors.
Parses a number of serialized `Example` protos given in `serialized`. We refer
to `serialized` as a batch with `batch_size` many entries of individual
`Example` protos.
This op parses serialized examples into a dictionary mapping keys to `Tensor`,
`SparseTensor`, and `RaggedTensor` objects. `features` is a dict from keys to
`VarLenFeature`, `RaggedFeature`, `SparseFeature`, and `FixedLenFeature`
objects. Each `VarLenFeature` and `SparseFeature` is mapped to a
`SparseTensor`; each `RaggedFeature` is mapped to a `RaggedTensor`; and each
`FixedLenFeature` is mapped to a `Tensor`. See `tf.io.parse_example` for more
details about feature dictionaries.
Args:
features: A `dict` mapping feature keys to `FixedLenFeature`,
`VarLenFeature`, `RaggedFeature`, and `SparseFeature` values.
num_parallel_calls: (Optional.) A `tf.int32` scalar `tf.Tensor`,
representing the number of parsing processes to call in parallel.
deterministic: (Optional.) A boolean controlling whether determinism
should be traded for performance by allowing elements to be produced out
of order if some parsing calls complete faster than others. If
`deterministic` is `None`, the
`tf.data.Options.experimental_deterministic` dataset option (`True` by
default) is used to decide whether to produce elements
deterministically.
Returns:
A dataset transformation function, which can be passed to
`tf.data.Dataset.apply`.
Raises:
ValueError: if features argument is None. |
1500 | 1498 | prefetch_to_device | tensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py | 37 | function | A transformation that prefetches dataset values to the given `device`.
NOTE: Although the transformation creates a `tf.data.Dataset`, the
transformation must be the final `Dataset` in the input pipeline.
Args:
device: A string. The name of a device to which elements will be prefetched.
buffer_size: (Optional.) The number of elements to buffer on `device`.
Defaults to an automatically chosen value.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1501 | 1499 | copy_to_device | tensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py | 60 | function | A transformation that copies dataset elements to the given `target_device`.
Args:
target_device: The name of a device to which elements will be copied.
source_device: The original device on which `input_dataset` will be placed.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1502 | 1500 | map_on_gpu | tensorflow/tensorflow/python/data/experimental/ops/prefetching_ops.py | 260 | function | Maps `map_func` across the elements of this dataset.
NOTE: This is a highly experimental version of `tf.data.Dataset.map` that runs
`map_func` on GPU. It must be used after applying the
`tf.data.experimental.copy_to_device` transformation with a GPU device
argument.
Args:
map_func: A function mapping a nested structure of tensors (having shapes
and types defined by `self.output_shapes` and `self.output_types`) to
another nested structure of tensors.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1503 | 1501 | RandomDatasetV2 | tensorflow/tensorflow/python/data/experimental/ops/random_ops.py | 32 | class | A `Dataset` of pseudorandom values. |
1504 | 1502 | element_spec | tensorflow/tensorflow/python/data/experimental/ops/random_ops.py | 43 | method | |
1505 | 1503 | RandomDatasetV1 | tensorflow/tensorflow/python/data/experimental/ops/random_ops.py | 48 | class | A `Dataset` of pseudorandom values. |
1506 | 1504 | make_tf_record_dataset | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 223 | function | Reads and optionally parses TFRecord files into a dataset.
Provides common functionality such as batching, optional parsing, shuffling,
and performant defaults.
Args:
file_pattern: List of files or patterns of TFRecord file paths.
See `tf.io.gfile.glob` for pattern rules.
batch_size: An int representing the number of records to combine
in a single batch.
parser_fn: (Optional.) A function accepting string input to parse
and process the record contents. This function must map records
to components of a fixed shape, so they may be batched. By
default, uses the record contents unmodified.
num_epochs: (Optional.) An int specifying the number of times this
dataset is repeated. If None (the default), cycles through the
dataset forever.
shuffle: (Optional.) A bool that indicates whether the input
should be shuffled. Defaults to `True`.
shuffle_buffer_size: (Optional.) Buffer size to use for
shuffling. A large buffer size ensures better shuffling, but
increases memory usage and startup time.
shuffle_seed: (Optional.) Randomization seed to use for shuffling.
prefetch_buffer_size: (Optional.) An int specifying the number of
feature batches to prefetch for performance improvement.
Defaults to auto-tune. Set to 0 to disable prefetching.
num_parallel_reads: (Optional.) Number of threads used to read
records from files. By default or if set to a value >1, the
results will be interleaved. Defaults to `24`.
num_parallel_parser_calls: (Optional.) Number of records to parse in
parallel. Defaults to `batch_size`.
drop_final_batch: (Optional.) Whether the last batch should be
dropped in case its size is smaller than `batch_size`; the
default behavior is not to drop the smaller batch.
Returns:
A dataset, where each element matches the output of `parser_fn`
except it will have an additional leading `batch-size` dimension,
or a `batch_size`-length 1-D tensor of strings if `parser_fn` is
unspecified. |
1507 | 1505 | make_csv_dataset_v2 | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 322 | function | Reads CSV files into a dataset.
Reads CSV files into a dataset, where each element is a (features, labels)
tuple that corresponds to a batch of CSV rows. The features dictionary
maps feature column names to `Tensor`s containing the corresponding
feature data, and labels is a `Tensor` containing the batch's label data.
Args:
file_pattern: List of files or patterns of file paths containing CSV
records. See `tf.io.gfile.glob` for pattern rules.
batch_size: An int representing the number of records to combine
in a single batch.
column_names: An optional list of strings that corresponds to the CSV
columns, in order. One per column of the input record. If this is not
provided, infers the column names from the first row of the records.
These names will be the keys of the features dict of each dataset element.
column_defaults: An optional list of default values for the CSV fields. One
item per selected column of the input record. Each item in the list is
either a valid CSV dtype (float32, float64, int32, int64, or string), or a
`Tensor` with one of the aforementioned types. The tensor can either be
a scalar default value (if the column is optional), or an empty tensor (if
the column is required). If a dtype is provided instead of a tensor, the
column is also treated as required. If this list is not provided, tries
to infer types based on reading the first num_rows_for_inference rows of
files specified, and assumes all columns are optional, defaulting to `0`
for numeric values and `""` for string values. If both this and
`select_columns` are specified, these must have the same lengths, and
`column_defaults` is assumed to be sorted in order of increasing column
index.
label_name: An optional string corresponding to the label column. If
provided, the data for this column is returned as a separate `Tensor` from
the features dictionary, so that the dataset complies with the format
expected by a `tf.Estimator.train` or `tf.Estimator.evaluate` input
function.
select_columns: An optional list of integer indices or string column
names, that specifies a subset of columns of CSV data to select. If
column names are provided, these must correspond to names provided in
`column_names` or inferred from the file header lines. When this argument
is specified, only a subset of CSV columns will be parsed and returned,
corresponding to the columns specified. Using this results in faster
parsing and lower memory usage. If both this and `column_defaults` are
specified, these must have the same lengths, and `column_defaults` is
assumed to be sorted in order of increasing column index.
field_delim: An optional `string`. Defaults to `","`. Char delimiter to
separate fields in a record.
use_quote_delim: An optional bool. Defaults to `True`. If false, treats
double quotation marks as regular characters inside of the string fields.
na_value: Additional string to recognize as NA/NaN.
header: A bool that indicates whether the first rows of provided CSV files
correspond to header lines with column names, and should not be included
in the data.
num_epochs: An int specifying the number of times this dataset is repeated.
If None, cycles through the dataset forever.
shuffle: A bool that indicates whether the input should be shuffled.
shuffle_buffer_size: Buffer size to use for shuffling. A large buffer size
ensures better shuffling, but increases memory usage and startup time.
shuffle_seed: Randomization seed to use for shuffling.
prefetch_buffer_size: An int specifying the number of feature
batches to prefetch for performance improvement. Recommended value is the
number of batches consumed per training step. Defaults to auto-tune.
num_parallel_reads: Number of threads used to read CSV records from files.
If >1, the results will be interleaved. Defaults to `1`.
sloppy: If `True`, reading performance will be improved at
the cost of non-deterministic ordering. If `False`, the order of elements
produced is deterministic prior to shuffling (elements are still
randomized if `shuffle=True`; note that if the seed is set, the order
of elements after shuffling is deterministic). Defaults to `False`.
num_rows_for_inference: Number of rows of a file to use for type inference
if record_defaults is not provided. If None, reads all the rows of all
the files. Defaults to 100.
compression_type: (Optional.) A `tf.string` scalar evaluating to one of
`""` (no compression), `"ZLIB"`, or `"GZIP"`. Defaults to no compression.
ignore_errors: (Optional.) If `True`, ignores errors with CSV file parsing,
such as malformed data or empty lines, and moves on to the next valid
CSV record. Otherwise, the dataset raises an error and stops processing
when encountering any invalid records. Defaults to `False`.
Returns:
A dataset, where each element is a (features, labels) tuple that corresponds
to a batch of `batch_size` CSV rows. The features dictionary maps feature
column names to `Tensor`s containing the corresponding column data, and
labels is a `Tensor` containing the column data for the label column
specified by `label_name`.
Raises:
ValueError: If any of the arguments is malformed. |
1508 | 1506 | make_csv_dataset_v1 | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 569 | function | |
1509 | 1507 | CsvDatasetV2 | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 604 | class | A Dataset comprising lines from one or more CSV files. |
1510 | 1508 | element_spec | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 778 | method | |
1511 | 1509 | CsvDatasetV1 | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 783 | class | A Dataset comprising lines from one or more CSV files. |
1512 | 1510 | make_batched_features_dataset_v2 | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 874 | function | Returns a `Dataset` of feature dictionaries from `Example` protos.
If label_key argument is provided, returns a `Dataset` of tuple
comprising of feature dictionaries and label.
Example:
```
serialized_examples = [
features {
feature { key: "age" value { int64_list { value: [ 0 ] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
feature { key: "kws" value { bytes_list { value: [ "code", "art" ] } } }
},
features {
feature { key: "age" value { int64_list { value: [] } } }
feature { key: "gender" value { bytes_list { value: [ "f" ] } } }
feature { key: "kws" value { bytes_list { value: [ "sports" ] } } }
}
]
```
We can use arguments:
```
features: {
"age": FixedLenFeature([], dtype=tf.int64, default_value=-1),
"gender": FixedLenFeature([], dtype=tf.string),
"kws": VarLenFeature(dtype=tf.string),
}
```
And the expected output is:
```python
{
"age": [[0], [-1]],
"gender": [["f"], ["f"]],
"kws": SparseTensor(
indices=[[0, 0], [0, 1], [1, 0]],
values=["code", "art", "sports"]
dense_shape=[2, 2]),
}
```
Args:
file_pattern: List of files or patterns of file paths containing
`Example` records. See `tf.io.gfile.glob` for pattern rules.
batch_size: An int representing the number of records to combine
in a single batch.
features: A `dict` mapping feature keys to `FixedLenFeature` or
`VarLenFeature` values. See `tf.io.parse_example`.
reader: A function or class that can be
called with a `filenames` tensor and (optional) `reader_args` and returns
a `Dataset` of `Example` tensors. Defaults to `tf.data.TFRecordDataset`.
label_key: (Optional) A string corresponding to the key labels are stored in
`tf.Examples`. If provided, it must be one of the `features` keys,
otherwise results in `ValueError`.
reader_args: Additional arguments to pass to the reader class.
num_epochs: Integer specifying the number of times to read through the
dataset. If None, cycles through the dataset forever. Defaults to `None`.
shuffle: A boolean, indicates whether the input should be shuffled. Defaults
to `True`.
shuffle_buffer_size: Buffer size of the ShuffleDataset. A large capacity
ensures better shuffling but would increase memory usage and startup time.
shuffle_seed: Randomization seed to use for shuffling.
prefetch_buffer_size: Number of feature batches to prefetch in order to
improve performance. Recommended value is the number of batches consumed
per training step. Defaults to auto-tune.
reader_num_threads: Number of threads used to read `Example` records. If >1,
the results will be interleaved. Defaults to `1`.
parser_num_threads: Number of threads to use for parsing `Example` tensors
into a dictionary of `Feature` tensors. Defaults to `2`.
sloppy_ordering: If `True`, reading performance will be improved at
the cost of non-deterministic ordering. If `False`, the order of elements
produced is deterministic prior to shuffling (elements are still
randomized if `shuffle=True`; note that if the seed is set, the order
of elements after shuffling is deterministic). Defaults to `False`.
drop_final_batch: If `True`, and the batch size does not evenly divide the
input dataset size, the final smaller batch will be dropped. Defaults to
`False`.
Returns:
A dataset of `dict` elements, (or a tuple of `dict` elements and label).
Each `dict` maps feature keys to `Tensor` or `SparseTensor` objects.
Raises:
TypeError: If `reader` is of the wrong type.
ValueError: If `label_key` is not one of the `features` keys. |
1513 | 1511 | make_batched_features_dataset_v1 | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 1058 | function | |
1514 | 1512 | SqlDatasetV2 | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 1114 | class | A `Dataset` consisting of the results from a SQL query. |
1515 | 1513 | element_spec | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 1155 | method | |
1516 | 1514 | SqlDatasetV1 | tensorflow/tensorflow/python/data/experimental/ops/readers.py | 1160 | class | A `Dataset` consisting of the results from a SQL query. |
1517 | 1515 | rejection_resample | tensorflow/tensorflow/python/data/experimental/ops/resampling.py | 37 | function | A transformation that resamples a dataset to achieve a target distribution.
**NOTE** Resampling is performed via rejection sampling; some fraction
of the input values will be dropped.
Args:
class_func: A function mapping an element of the input dataset to a scalar
`tf.int32` tensor. Values should be in `[0, num_classes)`.
target_dist: A floating point type tensor, shaped `[num_classes]`.
initial_dist: (Optional.) A floating point type tensor, shaped
`[num_classes]`. If not provided, the true class distribution is
estimated live in a streaming fashion.
seed: (Optional.) Python integer seed for the resampler.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
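Rejection sampling as described in the note above can be sketched in plain Python. This models only the semantics (accept each element of class `c` with probability proportional to `target_dist[c] / initial_dist[c]`), not TensorFlow's actual implementation:

```python
import random

def rejection_resample(elements, class_func, target_dist, initial_dist,
                       seed=None):
    """Keep each element of class c with probability proportional to
    target_dist[c] / initial_dist[c], so the surviving elements approach
    the target distribution; some inputs are necessarily dropped."""
    rng = random.Random(seed)
    ratios = [t / i for t, i in zip(target_dist, initial_dist)]
    max_ratio = max(ratios)  # scale so the scarcest class is always kept
    accept = [r / max_ratio for r in ratios]
    for elem in elements:
        if rng.random() < accept[class_func(elem)]:
            yield elem

# When the initial distribution already matches the target, nothing is dropped.
assert list(rejection_resample(range(10), lambda x: x % 2,
                               [0.5, 0.5], [0.5, 0.5])) == list(range(10))
```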
1518 | 1516 | scan | tensorflow/tensorflow/python/data/experimental/ops/scan_ops.py | 158 | function | A transformation that scans a function across an input dataset.
This transformation is a stateful relative of `tf.data.Dataset.map`.
In addition to mapping `scan_func` across the elements of the input dataset,
`scan()` accumulates one or more state tensors, whose initial values are
`initial_state`.
Args:
initial_state: A nested structure of tensors, representing the initial state
of the accumulator.
scan_func: A function that maps `(old_state, input_element)` to
`(new_state, output_element)`. It must take two arguments and return a
pair of nested structures of tensors. The `new_state` must match the
structure of `initial_state`.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
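The `(old_state, input_element)` to `(new_state, output_element)` contract described above can be mirrored by a plain-Python generator; this is an illustrative sketch, not the library code:

```python
def scan(initial_state, scan_func, elements):
    """Thread an accumulator through `elements`: `scan_func` maps
    (old_state, input_element) to (new_state, output_element)."""
    state = initial_state
    for elem in elements:
        state, output = scan_func(state, elem)
        yield output

# Running sum: the state is the total so far, emitted as each output.
assert list(scan(0, lambda s, x: (s + x, s + x), [1, 2, 3, 4])) == [1, 3, 6, 10]
```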
1519 | 1517 | shuffle_and_repeat | tensorflow/tensorflow/python/data/experimental/ops/shuffle_ops.py | 60 | function | Shuffles and repeats a Dataset, reshuffling with each repetition.
>>> d = tf.data.Dataset.from_tensor_slices([1, 2, 3])
>>> d = d.apply(tf.data.experimental.shuffle_and_repeat(2, count=2))
>>> [elem.numpy() for elem in d] # doctest: +SKIP
[2, 3, 1, 1, 3, 2]
```python
dataset.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size, count, seed))
```
produces the same output as
```python
dataset.shuffle(
buffer_size, seed=seed, reshuffle_each_iteration=True).repeat(count)
```
In each repetition, this dataset fills a buffer with `buffer_size` elements,
then randomly samples elements from this buffer, replacing the selected
elements with new elements. For perfect shuffling, set the buffer size equal
to the full size of the dataset.
For instance, if your dataset contains 10,000 elements but `buffer_size` is
set to 1,000, then `shuffle` will initially select a random element from
only the first 1,000 elements in the buffer. Once an element is selected,
its space in the buffer is replaced by the next (i.e. 1,001-st) element,
maintaining the 1,000 element buffer.
Args:
buffer_size: A `tf.int64` scalar `tf.Tensor`, representing the maximum
number of elements that will be buffered when prefetching.
count: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the number
of times the dataset should be repeated. The default behavior (if `count`
is `None` or `-1`) is for the dataset to be repeated indefinitely.
seed: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the random
seed that will be used to create the distribution. See
`tf.random.set_seed` for behavior.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
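The buffer-based sampling described above (fill a buffer, emit a random slot, refill it from the input) can be sketched in plain Python; this models the semantics, not TensorFlow's implementation:

```python
import random

def buffered_shuffle(elements, buffer_size, seed=None):
    """Fill a buffer with `buffer_size` elements, then repeatedly emit a
    random buffer slot and refill it with the next input element; at end
    of input, flush the remaining buffer in shuffled order."""
    rng = random.Random(seed)
    buffer = []
    for elem in elements:
        if len(buffer) < buffer_size:
            buffer.append(elem)
            continue
        idx = rng.randrange(len(buffer))
        yield buffer[idx]
        buffer[idx] = elem
    rng.shuffle(buffer)
    yield from buffer

# A buffer of size 1 cannot reorder anything; a buffer at least as large as
# the dataset gives a full shuffle. The multiset of elements is preserved.
assert list(buffered_shuffle([1, 2, 3, 4], 1)) == [1, 2, 3, 4]
assert sorted(buffered_shuffle(range(10), 4, seed=0)) == list(range(10))
```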
1520 | 1518 | sleep | tensorflow/tensorflow/python/data/experimental/ops/sleep.py | 37 | function | Sleeps for `sleep_microseconds` before producing each input element.
Args:
sleep_microseconds: The number of microseconds to sleep before producing an
input element.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1521 | 1519 | legacy_snapshot | tensorflow/tensorflow/python/data/experimental/ops/snapshot.py | 108 | function | Writes to/reads from a snapshot of a dataset.
This function attempts to determine whether a valid snapshot exists at the
`path`, and reads from the snapshot if so. If not, it will run the
preprocessing pipeline as usual, and write out a snapshot of the data
processed for future use.
Args:
path: A directory where we want to save our snapshots and/or read from a
previously saved snapshot.
compression: The type of compression to apply to the Dataset. Currently
supports "GZIP" or None. Defaults to None (no compression).
reader_path_prefix: A prefix to add to the path when reading from snapshots.
Defaults to None.
writer_path_prefix: A prefix to add to the path when writing to snapshots.
Defaults to None.
shard_size_bytes: The size of each shard to be written by the snapshot
dataset op. Defaults to 10 GiB.
pending_snapshot_expiry_seconds: How long to wait (in seconds) before the
snapshot op considers a previously unfinished snapshot to be stale.
num_reader_threads: Number of threads to parallelize reading from the
snapshot. Especially useful if compression is turned on, since the
decompression operation tends to be intensive. Defaults to 1. If > 1, this
might introduce non-determinism, i.e. the order in which elements are read
from the snapshot may differ from the order in which they were written.
reader_buffer_size: Maximum number of elements we can prefetch reading from
the snapshot. Defaults to 1. Increasing this might improve performance but
will increase memory consumption.
num_writer_threads: Number of threads to parallelize writing to the
snapshot. We'll open up `num_writer_threads` files and write to them in
parallel. Especially useful if compression is turned on, since the
compression operation tends to be intensive. Defaults to 1. If > 1, this
might introduce non-determinism, i.e. the order in which elements are
written may differ from the order in which they were read from the
upstream iterator.
writer_buffer_size: Maximum number of pipeline elements to fill up the
buffer before writing them out using `num_writer_threads`.
shuffle_on_read: If this is True, then the order in which examples are
produced when reading from a snapshot will be random. Defaults to False.
shuffle_seed: Optional. If shuffle_seed is set, the random number generator
used for shuffling (when shuffle_on_read is turned on) is seeded by the
given seed. Otherwise, it is seeded by a random seed that differs for
every run.
mode: The mode at which snapshot should operate. Valid options are "auto",
"read", "write", and "passthrough". The default mode is "auto", where the
snapshot op will automatically determine what mode to operate in.
snapshot_name: If set, use the supplied string as a named snapshot name
instead of introspecting the data pipeline and automatically generating a
unique identifier for the snapshot.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1522 | 1520 | snapshot | tensorflow/tensorflow/python/data/experimental/ops/snapshot.py | 258 | function | API to persist the output of the input dataset.
The snapshot API allows users to transparently persist the output of their
preprocessing pipeline to disk, and materialize the pre-processed data on a
different training run.
This API enables repeated preprocessing steps to be consolidated, and allows
re-use of already processed data, trading off disk storage and network
bandwidth for freeing up more valuable CPU resources and accelerator compute
time.
https://github.com/tensorflow/community/blob/master/rfcs/20200107-tf-data-snapshot.md
has detailed design documentation of this feature.
Users can specify various options to control the behavior of snapshot,
including how snapshots are read from and written to by passing in
user-defined functions to the `reader_func` and `shard_func` parameters.
`shard_func` is a user specified function that maps input elements to snapshot
shards.
Users may want to specify this function to control how snapshot files should
be written to disk. Below is an example of how a potential shard_func could
be written.
```python
dataset = ...
dataset = dataset.enumerate()
dataset = dataset.apply(tf.data.experimental.snapshot("/path/to/snapshot/dir",
shard_func=lambda x, y: x % NUM_SHARDS, ...))
dataset = dataset.map(lambda x, y: y)
```
`reader_func` is a user specified function that accepts a single argument:
(1) a Dataset of Datasets, each representing a "split" of elements of the
original dataset. The cardinality of the input dataset matches the
number of the shards specified in the `shard_func` (see above). The function
should return a Dataset of elements of the original dataset.
Users may want to specify this function to control how snapshot files should
be read from disk, including the amount of shuffling and parallelism.
Here is an example of a standard reader function a user can define. This
function enables both dataset shuffling and parallel reading of datasets:
```python
def user_reader_func(datasets):
# shuffle the datasets splits
datasets = datasets.shuffle(NUM_CORES)
# read datasets in parallel and interleave their elements
return datasets.interleave(lambda x: x, num_parallel_calls=AUTOTUNE)
dataset = dataset.apply(tf.data.experimental.snapshot("/path/to/snapshot/dir",
reader_func=user_reader_func))
```
By default, snapshot parallelizes reads by the number of cores available on
the system, but will not attempt to shuffle the data.
Args:
path: Required. A directory to use for storing / loading the snapshot to /
from.
compression: Optional. The type of compression to apply to the snapshot
written to disk. Supported options are `GZIP`, `SNAPPY`, `AUTO` or None.
Defaults to AUTO, which attempts to pick an appropriate compression
algorithm for the dataset.
reader_func: Optional. A function to control how to read data from snapshot
shards.
shard_func: Optional. A function to control how to shard data when writing a
snapshot.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
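The `enumerate` plus `x % NUM_SHARDS` sharding pattern shown above can be modeled in plain Python to see which elements land in which shard (`NUM_SHARDS` and `shard_by_index` are illustrative names, not library API):

```python
NUM_SHARDS = 3  # illustrative shard count

def shard_by_index(elements):
    """Assign element i to shard i % NUM_SHARDS, the same keying the
    enumerate-based shard_func example above performs."""
    shards = {s: [] for s in range(NUM_SHARDS)}
    for index, elem in enumerate(elements):
        shards[index % NUM_SHARDS].append(elem)
    return shards

# Elements are dealt round-robin across the shards.
assert shard_by_index(list("abcdefg")) == {
    0: ["a", "d", "g"], 1: ["b", "e"], 2: ["c", "f"]}
```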
1523 | 1521 | StatsAggregatorV2 | tensorflow/tensorflow/python/data/experimental/ops/stats_aggregator.py | 31 | class | A stateful resource that aggregates statistics from one or more iterators.
To record statistics, use one of the custom transformation functions defined
in this module when defining your `tf.data.Dataset`. All statistics will be
aggregated by the `StatsAggregator` that is associated with a particular
iterator (see below). For example, to record the latency of producing each
element by iterating over a dataset:
```python
dataset = ...
dataset = dataset.apply(tf.data.experimental.latency_stats("total_bytes"))
```
To associate a `StatsAggregator` with a `tf.data.Dataset` object, use
the following pattern:
```python
aggregator = tf.data.experimental.StatsAggregator()
dataset = ...
# Apply `StatsOptions` to associate `dataset` with `aggregator`.
options = tf.data.Options()
options.experimental_stats.aggregator = aggregator
dataset = dataset.with_options(options)
```
Note: This interface is experimental and expected to change. In particular,
we expect to add other implementations of `StatsAggregator` that provide
different ways of exporting statistics, and add more types of statistics. |
1524 | 1522 | StatsAggregatorV1 | tensorflow/tensorflow/python/data/experimental/ops/stats_aggregator.py | 82 | class | A stateful resource that aggregates statistics from one or more iterators.
To record statistics, use one of the custom transformation functions defined
in this module when defining your `tf.data.Dataset`. All statistics will be
aggregated by the `StatsAggregator` that is associated with a particular
iterator (see below). For example, to record the latency of producing each
element by iterating over a dataset:
```python
dataset = ...
dataset = dataset.apply(tf.data.experimental.latency_stats("total_bytes"))
```
To associate a `StatsAggregator` with a `tf.data.Dataset` object, use
the following pattern:
```python
aggregator = tf.data.experimental.StatsAggregator()
dataset = ...
# Apply `StatsOptions` to associate `dataset` with `aggregator`.
options = tf.data.Options()
options.experimental_stats.aggregator = aggregator
dataset = dataset.with_options(options)
```
To get a protocol buffer summary of the currently aggregated statistics,
use the `StatsAggregator.get_summary()` tensor. The easiest way to do this
is to add the returned tensor to the `tf.GraphKeys.SUMMARIES` collection,
so that the summaries will be included with any existing summaries.
```python
aggregator = tf.data.experimental.StatsAggregator()
# ...
stats_summary = aggregator.get_summary()
tf.compat.v1.add_to_collection(tf.GraphKeys.SUMMARIES, stats_summary)
```
Note: This interface is experimental and expected to change. In particular,
we expect to add other implementations of `StatsAggregator` that provide
different ways of exporting statistics, and add more types of statistics. |
1525 | 1523 | get_summary | tensorflow/tensorflow/python/data/experimental/ops/stats_aggregator.py | 130 | method | Returns a string `tf.Tensor` that summarizes the aggregated statistics.
The returned tensor will contain a serialized `tf.compat.v1.summary.Summary`
protocol
buffer, which can be used with the standard TensorBoard logging facilities.
Returns:
A scalar string `tf.Tensor` that summarizes the aggregated statistics. |
1526 | 1524 | set_stats_aggregator | tensorflow/tensorflow/python/data/experimental/ops/stats_ops.py | 29 | function | Set the given `stats_aggregator` for aggregating the input dataset stats.
Args:
stats_aggregator: A `tf.data.experimental.StatsAggregator` object.
prefix: (Optional) String; all statistics recorded for the input `dataset`
will have the given `prefix` prepended to their names.
counter_prefix: (Optional) String, all statistics recorded as `counters`
will have the given `prefix` for the counter. Defaults to "/tensorflow".
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1527 | 1525 | bytes_produced_stats | tensorflow/tensorflow/python/data/experimental/ops/stats_ops.py | 52 | function | Records the number of bytes produced by each element of the input dataset.
To consume the statistics, associate a `StatsAggregator` with the output
dataset.
Args:
tag: String. All statistics recorded by the returned transformation will
be associated with the given `tag`.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1528 | 1526 | latency_stats | tensorflow/tensorflow/python/data/experimental/ops/stats_ops.py | 75 | function | Records the latency of producing each element of the input dataset.
To consume the statistics, associate a `StatsAggregator` with the output
dataset.
Args:
tag: String. All statistics recorded by the returned transformation will
be associated with the given `tag`.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1529 | 1527 | StatsOptions | tensorflow/tensorflow/python/data/experimental/ops/stats_options.py | 28 | class | Represents options for collecting dataset stats using `StatsAggregator`.
You can set the stats options of a dataset through the `experimental_stats`
property of `tf.data.Options`; the property is an instance of
`tf.data.experimental.StatsOptions`. For example, to collect latency stats
on all dataset edges, use the following pattern:
```python
aggregator = tf.data.experimental.StatsAggregator()
options = tf.data.Options()
options.experimental_stats.aggregator = aggregator
options.experimental_stats.latency_all_edges = True
dataset = dataset.with_options(options)
``` |
1530 | 1528 | take_while | tensorflow/tensorflow/python/data/experimental/ops/take_while_ops.py | 56 | function | A transformation that stops dataset iteration based on a `predicate`.
Args:
predicate: A function that maps a nested structure of tensors (having shapes
and types defined by `self.output_shapes` and `self.output_types`) to a
scalar `tf.bool` tensor.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
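The stop-on-first-failure semantics (as opposed to `filter`, which keeps scanning the rest of the input) can be sketched with a plain-Python generator:

```python
def take_while(predicate, elements):
    """Emit elements until the predicate first fails, then stop for good;
    later elements that would pass the predicate are never produced."""
    for elem in elements:
        if not predicate(elem):
            return
        yield elem

# Iteration stops at 12; the trailing 2 is never produced.
assert list(take_while(lambda x: x < 10, [1, 4, 9, 12, 2])) == [1, 4, 9]
```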
1531 | 1529 | assert_next | tensorflow/tensorflow/python/data/experimental/ops/testing.py | 26 | function | A transformation that asserts which transformations happen next.
Transformations should be referred to by their base name, not including
version suffix. For example, use "Batch" instead of "BatchV2". "Batch" will
match any of "Batch", "BatchV1", "BatchV2", etc.
Args:
transformations: A `tf.string` vector `tf.Tensor` identifying the
transformations that are expected to happen next.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1532 | 1530 | non_serializable | tensorflow/tensorflow/python/data/experimental/ops/testing.py | 49 | function | A non-serializable identity transformation.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1533 | 1531 | sleep | tensorflow/tensorflow/python/data/experimental/ops/testing.py | 64 | function | Sleeps for `sleep_microseconds` before producing each input element.
Args:
sleep_microseconds: The number of microseconds to sleep before producing an
input element.
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
1534 | 1532 | ThreadingOptions | tensorflow/tensorflow/python/data/experimental/ops/threading_options.py | 26 | class | Represents options for dataset threading.
You can set the threading options of a dataset through the
`experimental_threading` property of `tf.data.Options`; the property is
an instance of `tf.data.experimental.ThreadingOptions`.
```python
options = tf.data.Options()
options.experimental_threading.private_threadpool_size = 10
dataset = dataset.with_options(options)
``` |
1535 | 1533 | PrivateThreadPool | tensorflow/tensorflow/python/data/experimental/ops/threadpool.py | 41 | class | A stateful resource that represents a private thread pool. |
1536 | 1534 | override_threadpool | tensorflow/tensorflow/python/data/experimental/ops/threadpool.py | 78 | function | Returns a new dataset that uses the given thread pool for its operations.
Args:
dataset: A `tf.data.Dataset` object.
thread_pool: A `PrivateThreadPool` object.
Returns:
A dataset containing the same values as `dataset`, but which uses
`thread_pool` to compute any of its parallel operations (such as
`tf.data.Dataset.map`). |
1537 | 1535 | unique | tensorflow/tensorflow/python/data/experimental/ops/unique.py | 27 | function | Creates a `Dataset` from another `Dataset`, discarding duplicates.
Use this transformation to produce a dataset that contains one instance of
each unique element in the input. For example:
```python
dataset = tf.data.Dataset.from_tensor_slices([1, 37, 2, 37, 2, 1])
# Using `unique()` will drop the duplicate elements.
dataset = dataset.apply(tf.data.experimental.unique()) # ==> { 1, 37, 2 }
```
Returns:
A `Dataset` transformation function, which can be passed to
`tf.data.Dataset.apply`. |
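The order-preserving deduplication above can be mirrored in plain Python with a seen-set; an illustrative sketch, not the kernel implementation:

```python
def unique(elements):
    """Yield the first occurrence of each distinct element, preserving
    the order in which elements first appear."""
    seen = set()
    for elem in elements:
        if elem not in seen:
            seen.add(elem)
            yield elem

# Matches the docstring example: later duplicates of 37, 2 and 1 are dropped.
assert list(unique([1, 37, 2, 37, 2, 1])) == [1, 37, 2]
```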
1538 | 1536 | TFRecordWriter | tensorflow/tensorflow/python/data/experimental/ops/writers.py | 30 | class | Writes a dataset to a TFRecord file.
The elements of the dataset must be scalar strings. To serialize dataset
elements as strings, you can use the `tf.io.serialize_tensor` function.
```python
dataset = tf.data.Dataset.range(3)
dataset = dataset.map(tf.io.serialize_tensor)
writer = tf.data.experimental.TFRecordWriter("/path/to/file.tfrecord")
writer.write(dataset)
```
To read back the elements, use `TFRecordDataset`.
```python
dataset = tf.data.TFRecordDataset("/path/to/file.tfrecord")
dataset = dataset.map(lambda x: tf.io.parse_tensor(x, tf.int64))
```
To shard a `dataset` across multiple TFRecord files:
```python
dataset = ... # dataset to be written
def reduce_func(key, dataset):
filename = tf.strings.join([PATH_PREFIX, tf.strings.as_string(key)])
writer = tf.data.experimental.TFRecordWriter(filename)
writer.write(dataset.map(lambda _, x: x))
return tf.data.Dataset.from_tensors(filename)
dataset = dataset.enumerate()
dataset = dataset.apply(tf.data.experimental.group_by_window(
lambda i, _: i % NUM_SHARDS, reduce_func, tf.int64.max
))
``` |
1539 | 1537 | write | tensorflow/tensorflow/python/data/experimental/ops/writers.py | 85 | method | Writes a dataset to a TFRecord file.
An operation that writes the content of the specified dataset to the file
specified in the constructor.
If the file exists, it will be overwritten.
Args:
dataset: a `tf.data.Dataset` whose elements are to be written to a file
Returns:
In graph mode, this returns an operation which when executed performs the
write. In eager mode, the write is performed by the method itself and
there is no return value.
Raises:
TypeError: if `dataset` is not a `tf.data.Dataset`.
TypeError: if the elements produced by the dataset are not scalar strings. |
1540 | 1538 | DispatchServer | tensorflow/tensorflow/python/data/experimental/service/server_lib.py | 29 | class | An in-process tf.data service dispatch server.
A `tf.data.experimental.service.DispatchServer` coordinates a cluster of
`tf.data.experimental.service.WorkerServer`s. When the workers start, they
register themselves with the dispatcher.
>>> dispatcher = tf.data.experimental.service.DispatchServer(port=0)
>>> dispatcher_address = dispatcher.target.split("://")[1]
>>> worker = tf.data.experimental.service.WorkerServer(
... port=0, dispatcher_address=dispatcher_address)
>>> dataset = tf.data.Dataset.range(10)
>>> dataset = dataset.apply(tf.data.experimental.service.distribute(
... processing_mode="parallel_epochs", service=dispatcher.target))
>>> print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
When starting a dedicated tf.data dispatch process, use join() to block
indefinitely after starting up the server.
```
dispatcher = tf.data.experimental.service.DispatchServer(port=5050)
dispatcher.join()
``` |
1541 | 1539 | start | tensorflow/tensorflow/python/data/experimental/service/server_lib.py | 78 | method | Starts this server.
>>> dispatcher = tf.data.experimental.service.DispatchServer(port=0,
... start=False)
>>> dispatcher.start()
Raises:
tf.errors.OpError: Or one of its subclasses if an error occurs while
starting the server. |
1542 | 1540 | join | tensorflow/tensorflow/python/data/experimental/service/server_lib.py | 91 | method | Blocks until the server has shut down.
This is useful when starting a dedicated dispatch process.
```
dispatcher = tf.data.experimental.service.DispatchServer(port=5050)
dispatcher.join()
```
Raises:
tf.errors.OpError: Or one of its subclasses if an error occurs while
joining the server. |
1543 | 1541 | target | tensorflow/tensorflow/python/data/experimental/service/server_lib.py | 108 | method | Returns a target that can be used to connect to the server.
>>> dispatcher = tf.data.experimental.service.DispatchServer(port=0)
>>> dataset = tf.data.Dataset.range(10)
>>> dataset = dataset.apply(tf.data.experimental.service.distribute(
... processing_mode="parallel_epochs", service=dispatcher.target))
The returned string will be in the form protocol://address, e.g.
"grpc://localhost:5050". |
1544 | 1542 | WorkerServer | tensorflow/tensorflow/python/data/experimental/service/server_lib.py | 148 | class | An in-process tf.data service worker server.
A `tf.data.experimental.service.WorkerServer` performs `tf.data.Dataset`
processing for user-defined datasets, and provides the resulting elements over
RPC. A worker is associated with a single
`tf.data.experimental.service.DispatchServer`.
>>> dispatcher = tf.data.experimental.service.DispatchServer(port=0)
>>> dispatcher_address = dispatcher.target.split("://")[1]
>>> worker = tf.data.experimental.service.WorkerServer(
... port=0, dispatcher_address=dispatcher_address)
>>> dataset = tf.data.Dataset.range(10)
>>> dataset = dataset.apply(tf.data.experimental.service.distribute(
... processing_mode="parallel_epochs", service=dispatcher.target))
>>> print(list(dataset.as_numpy_iterator()))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
When starting a dedicated tf.data worker process, use join() to block
indefinitely after starting up the server.
```
worker = tf.data.experimental.service.WorkerServer(
port=5051, dispatcher_address="grpc://localhost:5050")
worker.join()
``` |
1545 | 1543 | start | tensorflow/tensorflow/python/data/experimental/service/server_lib.py | 213 | method | Starts this server.
Raises:
tf.errors.OpError: Or one of its subclasses if an error occurs while
starting the server. |
1546 | 1544 | join | tensorflow/tensorflow/python/data/experimental/service/server_lib.py | 222 | method | Blocks until the server has shut down.
This is useful when starting a dedicated worker process.
```
worker_server = tf.data.experimental.service.WorkerServer(
port=5051, dispatcher_address="grpc://localhost:5050")
worker_server.join()
```
This method currently blocks forever.
Raises:
tf.errors.OpError: Or one of its subclasses if an error occurs while
joining the server. |
1547 | 1545 | Foo | tensorflow/tensorflow/python/data/kernel_tests/map_test.py | 153 | class | Dummy class used for invalid return value tests. |
1548 | 1546 | eager_only_combinations | tensorflow/tensorflow/python/data/kernel_tests/test_base.py | 44 | function | Returns the default test combinations for eager mode only tf.data tests. |
1549 | 1547 | graph_only_combinations | tensorflow/tensorflow/python/data/kernel_tests/test_base.py | 49 | function | Returns the default test combinations for graph mode only tf.data tests. |
1550 | 1548 | v2_only_combinations | tensorflow/tensorflow/python/data/kernel_tests/test_base.py | 54 | function | Returns the default test combinations for v2 only tf.data tests. |
1551 | 1549 | DatasetV2 | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 106 | class | Represents a potentially large set of elements.
The `tf.data.Dataset` API supports writing descriptive and efficient input
pipelines. `Dataset` usage follows a common pattern:
1. Create a source dataset from your input data.
2. Apply dataset transformations to preprocess the data.
3. Iterate over the dataset and process the elements.
Iteration happens in a streaming fashion, so the full dataset does not need to
fit into memory.
Source Datasets:
The simplest way to create a dataset is to create it from a python `list`:
>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
>>> for element in dataset:
... print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
To process lines from files, use `tf.data.TextLineDataset`:
>>> dataset = tf.data.TextLineDataset(["file1.txt", "file2.txt"])
To process records written in the `TFRecord` format, use `TFRecordDataset`:
>>> dataset = tf.data.TFRecordDataset(["file1.tfrecords", "file2.tfrecords"])
To create a dataset of all files matching a pattern, use
`tf.data.Dataset.list_files`:
>>> dataset = tf.data.Dataset.list_files("/path/*.txt") # doctest: +SKIP
See `tf.data.FixedLengthRecordDataset` and `tf.data.Dataset.from_generator`
for more ways to create datasets.
Transformations:
Once you have a dataset, you can apply transformations to prepare the data for
your model:
>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
>>> dataset = dataset.map(lambda x: x*2)
>>> list(dataset.as_numpy_iterator())
[2, 4, 6]
Common Terms:
**Element**: A single output from calling `next()` on a dataset iterator.
Elements may be nested structures containing multiple components. For
example, the element `(1, (3, "apple"))` has one tuple nested in another
tuple. The components are `1`, `3`, and `"apple"`.
**Component**: The leaf in the nested structure of an element.
Supported types:
Elements can be nested structures of tuples, named tuples, and dictionaries.
Note that Python lists are *not* treated as nested structures of components.
Instead, lists are converted to tensors and treated as components. For
example, the element `(1, [1, 2, 3])` has only two components; the tensor `1`
and the tensor `[1, 2, 3]`. Element components can be of any type
representable by `tf.TypeSpec`, including `tf.Tensor`, `tf.data.Dataset`,
`tf.sparse.SparseTensor`, `tf.RaggedTensor`, and `tf.TensorArray`.
>>> a = 1 # Integer element
>>> b = 2.0 # Float element
>>> c = (1, 2) # Tuple element with 2 components
>>> d = {"a": (2, 2), "b": 3} # Dict element with 3 components
>>> Point = collections.namedtuple("Point", ["x", "y"]) # doctest: +SKIP
>>> e = Point(1, 2) # Named tuple # doctest: +SKIP
>>> f = tf.data.Dataset.range(10) # Dataset element |
1552 | 1550 | options | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 348 | method | Returns the options for this dataset and its inputs.
Returns:
A `tf.data.Options` object representing the dataset options. |
1553 | 1551 | element_spec | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 451 | method | The type specification of an element of this dataset.
>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
>>> dataset.element_spec
TensorSpec(shape=(), dtype=tf.int32, name=None)
Returns:
A nested structure of `tf.TypeSpec` objects matching the structure of an
element of this dataset and specifying the type of individual components. |
1554 | 1552 | as_numpy_iterator | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 472 | method | Returns an iterator which converts all elements of the dataset to numpy.
Use `as_numpy_iterator` to inspect the content of your dataset. To see
element shapes and types, print dataset elements directly instead of using
`as_numpy_iterator`.
>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
>>> for element in dataset:
... print(element)
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
This method requires that you are running in eager mode and the dataset's
element_spec contains only `TensorSpec` components.
>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
>>> for element in dataset.as_numpy_iterator():
... print(element)
1
2
3
>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
>>> print(list(dataset.as_numpy_iterator()))
[1, 2, 3]
`as_numpy_iterator()` will preserve the nested structure of dataset
elements.
>>> dataset = tf.data.Dataset.from_tensor_slices({'a': ([1, 2], [3, 4]),
... 'b': [5, 6]})
>>> list(dataset.as_numpy_iterator()) == [{'a': (1, 3), 'b': 5},
... {'a': (2, 4), 'b': 6}]
True
Returns:
An iterable over the elements of the dataset, with their tensors converted
to numpy arrays.
Raises:
TypeError: if an element contains a non-`Tensor` value.
RuntimeError: if eager execution is not enabled. |
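The nested-structure preservation shown above (a dict of components becomes a sequence of per-element dicts) can be mimicked in plain Python; `slice_components` is a hypothetical helper that slices each leaf sequence along its first dimension:

```python
def slice_components(structure):
    """Recursively slice every leaf sequence along its first dimension,
    rebuilding the dict/tuple structure around each resulting element."""
    if isinstance(structure, dict):
        keys = list(structure)
        sliced = [slice_components(structure[k]) for k in keys]
        return [dict(zip(keys, values)) for values in zip(*sliced)]
    if isinstance(structure, tuple):
        sliced = [slice_components(component) for component in structure]
        return [tuple(values) for values in zip(*sliced)]
    return list(structure)  # leaf component: one entry per element

# Mirrors the dict example above: structure is preserved per element.
assert slice_components({"a": ([1, 2], [3, 4]), "b": [5, 6]}) == [
    {"a": (1, 3), "b": 5}, {"a": (2, 4), "b": 6}]
```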
1555 | 1553 | from_tensors | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 570 | method | Creates a `Dataset` with a single element, comprising the given tensors.
`from_tensors` produces a dataset containing only a single element. To slice
the input tensor into multiple elements, use `from_tensor_slices` instead.
>>> dataset = tf.data.Dataset.from_tensors([1, 2, 3])
>>> list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32)]
>>> dataset = tf.data.Dataset.from_tensors(([1, 2, 3], 'A'))
>>> list(dataset.as_numpy_iterator())
[(array([1, 2, 3], dtype=int32), b'A')]
>>> # You can use `from_tensors` to produce a dataset which repeats
>>> # the same example many times.
>>> example = tf.constant([1,2,3])
>>> dataset = tf.data.Dataset.from_tensors(example).repeat(2)
>>> list(dataset.as_numpy_iterator())
[array([1, 2, 3], dtype=int32), array([1, 2, 3], dtype=int32)]
Note that if `tensors` contains a NumPy array, and eager execution is not
enabled, the values will be embedded in the graph as one or more
`tf.constant` operations. For large datasets (> 1 GB), this can waste
memory and run into byte limits of graph serialization. If `tensors`
contains one or more large NumPy arrays, consider the alternative described
in [this
guide](https://tensorflow.org/guide/data#consuming_numpy_arrays).
Args:
tensors: A dataset element.
Returns:
Dataset: A `Dataset`. |
1556 | 1554 | from_tensor_slices | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 607 | method | Creates a `Dataset` whose elements are slices of the given tensors.
The given tensors are sliced along their first dimension. This operation
preserves the structure of the input tensors, removing the first dimension
of each tensor and using it as the dataset dimension. All input tensors
must have the same size in their first dimensions.
>>> # Slicing a 1D tensor produces scalar tensor elements.
>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
>>> list(dataset.as_numpy_iterator())
[1, 2, 3]
>>> # Slicing a 2D tensor produces 1D tensor elements.
>>> dataset = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]])
>>> list(dataset.as_numpy_iterator())
[array([1, 2], dtype=int32), array([3, 4], dtype=int32)]
>>> # Slicing a tuple of 1D tensors produces tuple elements containing
>>> # scalar tensors.
>>> dataset = tf.data.Dataset.from_tensor_slices(([1, 2], [3, 4], [5, 6]))
>>> list(dataset.as_numpy_iterator())
[(1, 3, 5), (2, 4, 6)]
>>> # Dictionary structure is also preserved.
>>> dataset = tf.data.Dataset.from_tensor_slices({"a": [1, 2], "b": [3, 4]})
>>> list(dataset.as_numpy_iterator()) == [{'a': 1, 'b': 3},
... {'a': 2, 'b': 4}]
True
>>> # Two tensors can be combined into one Dataset object.
>>> features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor
>>> labels = tf.constant(['A', 'B', 'A']) # ==> 3-element 1D tensor
>>> dataset = Dataset.from_tensor_slices((features, labels))
>>> # Both the features and the labels tensors can be converted
>>> # to a Dataset object separately and combined after.
>>> features_dataset = Dataset.from_tensor_slices(features)
>>> labels_dataset = Dataset.from_tensor_slices(labels)
>>> dataset = Dataset.zip((features_dataset, labels_dataset))
>>> # A batched feature and label set can be converted to a Dataset
>>> # in similar fashion.
>>> batched_features = tf.constant([[[1, 3], [2, 3]],
... [[2, 1], [1, 2]],
... [[3, 3], [3, 2]]], shape=(3, 2, 2))
>>> batched_labels = tf.constant([['A', 'A'],
... ['B', 'B'],
... ['A', 'B']], shape=(3, 2, 1))
>>> dataset = Dataset.from_tensor_slices((batched_features, batched_labels))
>>> for element in dataset.as_numpy_iterator():
... print(element)
(array([[1, 3],
[2, 3]], dtype=int32), array([[b'A'],
[b'A']], dtype=object))
(array([[2, 1],
[1, 2]], dtype=int32), array([[b'B'],
[b'B']], dtype=object))
(array([[3, 3],
[3, 2]], dtype=int32), array([[b'A'],
[b'B']], dtype=object))
Note that if `tensors` contains a NumPy array, and eager execution is not
enabled, the values will be embedded in the graph as one or more
`tf.constant` operations. For large datasets (> 1 GB), this can waste
memory and run into byte limits of graph serialization. If `tensors`
contains one or more large NumPy arrays, consider the alternative described
in [this guide](
https://tensorflow.org/guide/data#consuming_numpy_arrays).
Args:
tensors: A dataset element, with each component having the same size in
the first dimension.
Returns:
Dataset: A `Dataset`. |
1557 | 1555 | from_generator | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 721 | method | Creates a `Dataset` whose elements are generated by `generator`.
The `generator` argument must be a callable object that returns
an object that supports the `iter()` protocol (e.g. a generator function).
The elements generated by `generator` must be compatible with the given
`output_types` and (optional) `output_shapes` arguments.
>>> import itertools
>>>
>>> def gen():
... for i in itertools.count(1):
... yield (i, [1] * i)
>>>
>>> dataset = tf.data.Dataset.from_generator(
... gen,
... (tf.int64, tf.int64),
... (tf.TensorShape([]), tf.TensorShape([None])))
>>>
>>> list(dataset.take(3).as_numpy_iterator())
[(1, array([1])), (2, array([1, 1])), (3, array([1, 1, 1]))]
Note: The current implementation of `Dataset.from_generator()` uses
`tf.numpy_function` and inherits the same constraints. In particular, it
requires the dataset and iterator related operations to be placed
on a device in the same process as the Python program that called
`Dataset.from_generator()`. The body of `generator` will not be
serialized in a `GraphDef`, and you should not use this method if you
need to serialize your model and restore it in a different environment.
Note: If `generator` depends on mutable global variables or other external
state, be aware that the runtime may invoke `generator` multiple times
(in order to support repeating the `Dataset`) and at any time
between the call to `Dataset.from_generator()` and the production of the
first element from the generator. Mutating global variables or external
state can cause undefined behavior, and we recommend that you explicitly
cache any external state in `generator` before calling
`Dataset.from_generator()`.
Args:
generator: A callable object that returns an object that supports the
`iter()` protocol. If `args` is not specified, `generator` must take no
arguments; otherwise it must take as many arguments as there are values
in `args`.
output_types: A nested structure of `tf.DType` objects corresponding to
each component of an element yielded by `generator`.
output_shapes: (Optional.) A nested structure of `tf.TensorShape` objects
corresponding to each component of an element yielded by `generator`.
args: (Optional.) A tuple of `tf.Tensor` objects that will be evaluated
and passed to `generator` as NumPy-array arguments.
Returns:
Dataset: A `Dataset`. |
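The external-state caveat above can be sketched in plain Python (the `make_gen` helper is hypothetical, not part of the `tf.data` API): snapshot any mutable state once, so that repeated invocations of the generator see stable data.

```python
# Hypothetical external state that might mutate between iterations.
external_state = [1, 2, 3]

def make_gen(snapshot):
    # Capture a copy of the state up front, so re-invocations of the
    # generator (e.g. when the dataset is repeated) yield the same data.
    def gen():
        for x in snapshot:
            yield x
    return gen

gen = make_gen(list(external_state))
external_state.append(4)  # later mutation does not affect the generator

# The runtime may call `gen()` more than once; each call replays the snapshot.
assert list(gen()) == [1, 2, 3]
assert list(gen()) == [1, 2, 3]
```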
1558 | 1556 | range | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 921 | method | Creates a `Dataset` of a step-separated range of values.
>>> list(Dataset.range(5).as_numpy_iterator())
[0, 1, 2, 3, 4]
>>> list(Dataset.range(2, 5).as_numpy_iterator())
[2, 3, 4]
>>> list(Dataset.range(1, 5, 2).as_numpy_iterator())
[1, 3]
>>> list(Dataset.range(1, 5, -2).as_numpy_iterator())
[]
>>> list(Dataset.range(5, 1).as_numpy_iterator())
[]
>>> list(Dataset.range(5, 1, -2).as_numpy_iterator())
[5, 3]
>>> list(Dataset.range(2, 5, output_type=tf.int32).as_numpy_iterator())
[2, 3, 4]
>>> list(Dataset.range(1, 5, 2, output_type=tf.float32).as_numpy_iterator())
[1.0, 3.0]
Args:
*args: follows the same semantics as Python's built-in `range`.
len(args) == 1 -> start = 0, stop = args[0], step = 1.
len(args) == 2 -> start = args[0], stop = args[1], step = 1.
len(args) == 3 -> start = args[0], stop = args[1], step = args[2].
**kwargs:
- output_type: Its expected dtype. (Optional, default: `tf.int64`).
Returns:
Dataset: A `RangeDataset`.
Raises:
ValueError: if len(args) == 0. |
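Setting the `output_type` cast aside, the start/stop/step handling described above mirrors Python's built-in `range`, as this plain-Python sketch shows:

```python
def dataset_range(*args):
    # Mirrors Dataset.range's argument handling: 1, 2, or 3 positional
    # arguments map to (stop), (start, stop), (start, stop, step).
    if not args:
        raise ValueError("range requires at least one argument")
    return list(range(*args))

assert dataset_range(5) == [0, 1, 2, 3, 4]
assert dataset_range(2, 5) == [2, 3, 4]
assert dataset_range(1, 5, 2) == [1, 3]
assert dataset_range(1, 5, -2) == []   # empty, as in the doctest above
assert dataset_range(5, 1, -2) == [5, 3]
```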
1559 | 1557 | zip | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 958 | method | Creates a `Dataset` by zipping together the given datasets.
This method has similar semantics to the built-in `zip()` function
in Python, with the main difference being that the `datasets`
argument can be an arbitrary nested structure of `Dataset` objects.
>>> # The nested structure of the `datasets` argument determines the
>>> # structure of elements in the resulting dataset.
>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
>>> b = tf.data.Dataset.range(4, 7) # ==> [ 4, 5, 6 ]
>>> ds = tf.data.Dataset.zip((a, b))
>>> list(ds.as_numpy_iterator())
[(1, 4), (2, 5), (3, 6)]
>>> ds = tf.data.Dataset.zip((b, a))
>>> list(ds.as_numpy_iterator())
[(4, 1), (5, 2), (6, 3)]
>>>
>>> # The `datasets` argument may contain an arbitrary number of datasets.
>>> c = tf.data.Dataset.range(7, 13).batch(2) # ==> [ [7, 8],
... # [9, 10],
... # [11, 12] ]
>>> ds = tf.data.Dataset.zip((a, b, c))
>>> for element in ds.as_numpy_iterator():
... print(element)
(1, 4, array([7, 8]))
(2, 5, array([ 9, 10]))
(3, 6, array([11, 12]))
>>>
>>> # The number of elements in the resulting dataset is the same as
>>> # the size of the smallest dataset in `datasets`.
>>> d = tf.data.Dataset.range(13, 15) # ==> [ 13, 14 ]
>>> ds = tf.data.Dataset.zip((a, d))
>>> list(ds.as_numpy_iterator())
[(1, 13), (2, 14)]
Args:
datasets: A nested structure of datasets.
Returns:
Dataset: A `Dataset`. |
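The truncate-to-shortest behaviour is the same as Python's built-in `zip`, which this plain-Python sketch of the last example makes explicit:

```python
a = [1, 2, 3]    # analogue of tf.data.Dataset.range(1, 4)
d = [13, 14]     # analogue of tf.data.Dataset.range(13, 15)

# Like Dataset.zip, Python's zip stops at the shortest input.
pairs = list(zip(a, d))
assert pairs == [(1, 13), (2, 14)]
```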
1560 | 1558 | concatenate | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 1002 | method | Creates a `Dataset` by concatenating the given dataset with this dataset.
>>> a = tf.data.Dataset.range(1, 4) # ==> [ 1, 2, 3 ]
>>> b = tf.data.Dataset.range(4, 8) # ==> [ 4, 5, 6, 7 ]
>>> ds = a.concatenate(b)
>>> list(ds.as_numpy_iterator())
[1, 2, 3, 4, 5, 6, 7]
>>> # The input dataset and dataset to be concatenated should have the same
>>> # nested structures and output types.
>>> c = tf.data.Dataset.zip((a, b))
>>> a.concatenate(c)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and (tf.int64, tf.int64)
>>> d = tf.data.Dataset.from_tensor_slices(["a", "b", "c"])
>>> a.concatenate(d)
Traceback (most recent call last):
TypeError: Two datasets to concatenate have different types
<dtype: 'int64'> and <dtype: 'string'>
Args:
dataset: `Dataset` to be concatenated.
Returns:
Dataset: A `Dataset`. |
1561 | 1559 | prefetch | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 1031 | method | Creates a `Dataset` that prefetches elements from this dataset.
Most dataset input pipelines should end with a call to `prefetch`. This
allows later elements to be prepared while the current element is being
processed. This often improves latency and throughput, at the cost of
using additional memory to store prefetched elements.
Note: Like other `Dataset` methods, prefetch operates on the
elements of the input dataset. It has no concept of examples vs. batches.
`examples.prefetch(2)` will prefetch two elements (2 examples),
while `examples.batch(20).prefetch(2)` will prefetch 2 elements
(2 batches, of 20 examples each).
>>> dataset = tf.data.Dataset.range(3)
>>> dataset = dataset.prefetch(2)
>>> list(dataset.as_numpy_iterator())
[0, 1, 2]
Args:
buffer_size: A `tf.int64` scalar `tf.Tensor`, representing the maximum
number of elements that will be buffered when prefetching.
Returns:
Dataset: A `Dataset`. |
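The producer/consumer idea behind prefetching can be sketched in plain Python with a background thread and a bounded queue (this is only an illustration of the concept, not how `tf.data` implements it):

```python
import queue
import threading

def prefetch(iterable, buffer_size):
    # A background thread fills a bounded queue (the "buffer") while the
    # consumer drains it, so production overlaps with consumption.
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()

    def producer():
        for item in iterable:
            q.put(item)
        q.put(sentinel)  # signal end of input

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            return
        yield item

assert list(prefetch(range(3), buffer_size=2)) == [0, 1, 2]
```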
1562 | 1560 | list_files | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 1060 | method | A dataset of all files matching one or more glob patterns.
The `file_pattern` argument should be a small number of glob patterns.
If your filenames have already been globbed, use
`Dataset.from_tensor_slices(filenames)` instead, as re-globbing every
filename with `list_files` may result in poor performance with remote
storage systems.
Note: The default behavior of this method is to return filenames in
a non-deterministic random shuffled order. Pass a `seed` or `shuffle=False`
to get results in a deterministic order.
Example:
If we had the following files on our filesystem:
- /path/to/dir/a.txt
- /path/to/dir/b.py
- /path/to/dir/c.py
If we pass "/path/to/dir/*.py" as the file pattern, the dataset
would produce:
- /path/to/dir/b.py
- /path/to/dir/c.py
Args:
file_pattern: A string, a list of strings, or a `tf.Tensor` of string type
(scalar or vector), representing the filename glob (i.e. shell wildcard)
pattern(s) that will be matched.
shuffle: (Optional.) If `True`, the file names will be shuffled randomly.
Defaults to `True`.
seed: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the random
seed that will be used to create the distribution. See
`tf.random.set_seed` for behavior.
Returns:
Dataset: A `Dataset` of strings corresponding to file names. |
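The shell-wildcard matching in the example above can be reproduced with Python's standard `fnmatch` module (an analogy for the matching rule only; `list_files` itself queries the filesystem):

```python
import fnmatch

files = ["/path/to/dir/a.txt", "/path/to/dir/b.py", "/path/to/dir/c.py"]

# Shell-style wildcard matching, as in the example above.
matched = fnmatch.filter(files, "/path/to/dir/*.py")
assert matched == ["/path/to/dir/b.py", "/path/to/dir/c.py"]
```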
1563 | 1561 | repeat | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 1128 | method | Repeats this dataset so each original value is seen `count` times.
>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
>>> dataset = dataset.repeat(3)
>>> list(dataset.as_numpy_iterator())
[1, 2, 3, 1, 2, 3, 1, 2, 3]
Note: If this dataset is a function of global state (e.g. a random number
generator), then different repetitions may produce different elements.
Args:
count: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the
number of times the dataset should be repeated. The default behavior (if
`count` is `None` or `-1`) is for the dataset to be repeated indefinitely.
Returns:
Dataset: A `Dataset`. |
1564 | 1562 | enumerate | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 1149 | method | Enumerates the elements of this dataset.
It is similar to Python's `enumerate`.
>>> dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])
>>> dataset = dataset.enumerate(start=5)
>>> for element in dataset.as_numpy_iterator():
... print(element)
(5, 1)
(6, 2)
(7, 3)
>>> # The nested structure of the input dataset determines the structure of
>>> # elements in the resulting dataset.
>>> dataset = tf.data.Dataset.from_tensor_slices([(7, 8), (9, 10)])
>>> dataset = dataset.enumerate()
>>> for element in dataset.as_numpy_iterator():
... print(element)
(0, array([7, 8], dtype=int32))
(1, array([ 9, 10], dtype=int32))
Args:
start: A `tf.int64` scalar `tf.Tensor`, representing the start value for
enumeration.
Returns:
Dataset: A `Dataset`. |
1565 | 1563 | shuffle | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 1182 | method | Randomly shuffles the elements of this dataset.
This dataset fills a buffer with `buffer_size` elements, then randomly
samples elements from this buffer, replacing the selected elements with new
elements. For perfect shuffling, a buffer size greater than or equal to the
full size of the dataset is required.
For instance, if your dataset contains 10,000 elements but `buffer_size` is
set to 1,000, then `shuffle` will initially select a random element from
only the first 1,000 elements in the buffer. Once an element is selected,
its space in the buffer is replaced by the next (i.e. 1,001-st) element,
maintaining the 1,000 element buffer.
`reshuffle_each_iteration` controls whether the shuffle order should be
different for each epoch. In TF 1.X, the idiomatic way to create epochs
was through the `repeat` transformation:
>>> dataset = tf.data.Dataset.range(3)
>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
>>> dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 2, 0]
>>> dataset = tf.data.Dataset.range(3)
>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
>>> dataset = dataset.repeat(2) # doctest: +SKIP
[1, 0, 2, 1, 0, 2]
In TF 2.0, `tf.data.Dataset` objects are Python iterables which makes it
possible to also create epochs through Python iteration:
>>> dataset = tf.data.Dataset.range(3)
>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=True)
>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 2, 0]
>>> dataset = tf.data.Dataset.range(3)
>>> dataset = dataset.shuffle(3, reshuffle_each_iteration=False)
>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP
[1, 0, 2]
Args:
buffer_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
elements from this dataset from which the new dataset will sample.
seed: (Optional.) A `tf.int64` scalar `tf.Tensor`, representing the random
seed that will be used to create the distribution. See
`tf.random.set_seed` for behavior.
reshuffle_each_iteration: (Optional.) A boolean, which if true indicates
that the dataset should be pseudorandomly reshuffled each time it is
iterated over. (Defaults to `True`.)
Returns:
Dataset: A `Dataset`. |
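The buffer-based sampling described above can be simulated in plain Python (a conceptual sketch, not the actual `tf.data` implementation): fill a buffer, repeatedly emit a random buffered element, and refill its slot from the input.

```python
import random

def buffered_shuffle(items, buffer_size, seed=None):
    # Fill a buffer of `buffer_size` elements, then repeatedly yield a
    # random one and replace it with the next input element.
    rng = random.Random(seed)
    it = iter(items)
    buf = []
    for x in it:
        buf.append(x)
        if len(buf) == buffer_size:
            break
    while buf:
        i = rng.randrange(len(buf))
        yield buf[i]
        try:
            buf[i] = next(it)
        except StopIteration:
            buf.pop(i)

out = list(buffered_shuffle(range(10), buffer_size=3, seed=0))
# Every element appears exactly once; the order is pseudorandom, and a
# small buffer can only draw early outputs from the first few elements.
assert sorted(out) == list(range(10))
```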
1566 | 1564 | cache | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 1242 | method | Caches the elements in this dataset.
The first time the dataset is iterated over, its elements will be cached
either in the specified file or in memory. Subsequent iterations will
use the cached data.
Note: For the cache to be finalized, the input dataset must be iterated
through in its entirety. Otherwise, subsequent iterations will not use
cached data.
>>> dataset = tf.data.Dataset.range(5)
>>> dataset = dataset.map(lambda x: x**2)
>>> dataset = dataset.cache()
>>> # The first time reading through the data will generate the data using
>>> # `range` and `map`.
>>> list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
>>> # Subsequent iterations read from the cache.
>>> list(dataset.as_numpy_iterator())
[0, 1, 4, 9, 16]
When caching to a file, the cached data will persist across runs. Even the
first iteration through the data will read from the cache file. Changing
the input pipeline before the call to `.cache()` will have no effect until
the cache file is removed or the filename is changed.
>>> dataset = tf.data.Dataset.range(5)
>>> dataset = dataset.cache("/path/to/file") # doctest: +SKIP
>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
>>> dataset = tf.data.Dataset.range(10)
>>> dataset = dataset.cache("/path/to/file") # Same file! # doctest: +SKIP
>>> list(dataset.as_numpy_iterator()) # doctest: +SKIP
[0, 1, 2, 3, 4]
Note: `cache` will produce exactly the same elements during each iteration
through the dataset. If you wish to randomize the iteration order, make sure
to call `shuffle` *after* calling `cache`.
Args:
filename: A `tf.string` scalar `tf.Tensor`, representing the name of a
directory on the filesystem to use for caching elements in this Dataset.
If a filename is not provided, the dataset will be cached in memory.
Returns:
Dataset: A `Dataset`. |
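The in-memory caching behaviour, including the caveat that the cache is only finalized after a complete pass, can be sketched in plain Python (`CachedIterable` is a hypothetical illustration, not part of the `tf.data` API):

```python
class CachedIterable:
    """Sketch of in-memory caching: the first full pass records
    elements; later passes replay the cache and never touch the source."""

    def __init__(self, source_fn):
        self.source_fn = source_fn  # zero-arg callable producing an iterable
        self.cache = None

    def __iter__(self):
        if self.cache is not None:
            return iter(self.cache)
        return self._fill()

    def _fill(self):
        items = []
        for x in self.source_fn():
            items.append(x)
            yield x
        self.cache = items  # finalized only after a complete pass

calls = []
def expensive_source():
    calls.append(1)
    return (x * x for x in range(5))  # analogue of range(5).map(x**2)

ds = CachedIterable(expensive_source)
assert list(ds) == [0, 1, 4, 9, 16]  # first pass generates the data
assert list(ds) == [0, 1, 4, 9, 16]  # second pass reads the cache
assert len(calls) == 1
```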
1567 | 1565 | take | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 1292 | method | Creates a `Dataset` with at most `count` elements from this dataset.
>>> dataset = tf.data.Dataset.range(10)
>>> dataset = dataset.take(3)
>>> list(dataset.as_numpy_iterator())
[0, 1, 2]
Args:
count: A `tf.int64` scalar `tf.Tensor`, representing the number of
elements of this dataset that should be taken to form the new dataset.
If `count` is -1, or if `count` is greater than the size of this
dataset, the new dataset will contain all elements of this dataset.
Returns:
Dataset: A `Dataset`. |
1568 | 1566 | skip | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 1311 | method | Creates a `Dataset` that skips `count` elements from this dataset.
>>> dataset = tf.data.Dataset.range(10)
>>> dataset = dataset.skip(7)
>>> list(dataset.as_numpy_iterator())
[7, 8, 9]
Args:
count: A `tf.int64` scalar `tf.Tensor`, representing the number of
elements of this dataset that should be skipped to form the new dataset.
If `count` is greater than the size of this dataset, the new dataset
will contain no elements. If `count` is -1, skips the entire dataset.
Returns:
Dataset: A `Dataset`. |
1569 | 1567 | shard | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 1330 | method | Creates a `Dataset` that includes only 1/`num_shards` of this dataset.
`shard` is deterministic. The Dataset produced by `A.shard(n, i)` will
contain all elements of A whose index mod n = i.
>>> A = tf.data.Dataset.range(10)
>>> B = A.shard(num_shards=3, index=0)
>>> list(B.as_numpy_iterator())
[0, 3, 6, 9]
>>> C = A.shard(num_shards=3, index=1)
>>> list(C.as_numpy_iterator())
[1, 4, 7]
>>> D = A.shard(num_shards=3, index=2)
>>> list(D.as_numpy_iterator())
[2, 5, 8]
This dataset operator is very useful when running distributed training, as
it allows each worker to read a unique subset.
When reading a single input file, you can shard elements as follows:
```python
d = tf.data.TFRecordDataset(input_file)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
Important caveats:
- Be sure to shard before you use any randomizing operator (such as
shuffle).
- Generally it is best if the shard operator is used early in the dataset
pipeline. For example, when reading from a set of TFRecord files, shard
before converting the dataset to input samples. This avoids reading every
file on every worker. The following is an example of an efficient
sharding strategy within a complete pipeline:
```python
d = Dataset.list_files(pattern)
d = d.shard(num_workers, worker_index)
d = d.repeat(num_epochs)
d = d.shuffle(shuffle_buffer_size)
d = d.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
d = d.map(parser_fn, num_parallel_calls=num_map_threads)
```
Args:
num_shards: A `tf.int64` scalar `tf.Tensor`, representing the number of
shards operating in parallel.
index: A `tf.int64` scalar `tf.Tensor`, representing the worker index.
Returns:
Dataset: A `Dataset`.
Raises:
InvalidArgumentError: if `num_shards` or `index` are illegal values.
Note: error checking is done on a best-effort basis, and errors aren't
guaranteed to be caught upon dataset creation. (e.g. providing a
placeholder tensor bypasses the early checking, and will instead result
in an error during a session.run call.) |
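The "index mod n" rule stated above can be written out as a plain-Python sketch:

```python
def shard(items, num_shards, index):
    # Deterministic sharding: keep the elements whose position mod
    # num_shards equals index, matching A.shard(n, i) above.
    if not 0 <= index < num_shards:
        raise ValueError("index must be in [0, num_shards)")
    return [x for pos, x in enumerate(items) if pos % num_shards == index]

A = list(range(10))
assert shard(A, 3, 0) == [0, 3, 6, 9]
assert shard(A, 3, 1) == [1, 4, 7]
assert shard(A, 3, 2) == [2, 5, 8]
```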
1570 | 1568 | batch | tensorflow/tensorflow/python/data/ops/dataset_ops.py | 1398 | method | Combines consecutive elements of this dataset into batches.
>>> dataset = tf.data.Dataset.range(8)
>>> dataset = dataset.batch(3)
>>> list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7])]
>>> dataset = tf.data.Dataset.range(8)
>>> dataset = dataset.batch(3, drop_remainder=True)
>>> list(dataset.as_numpy_iterator())
[array([0, 1, 2]), array([3, 4, 5])]
The components of the resulting element will have an additional outer
dimension, which will be `batch_size` (or `N % batch_size` for the last
element if `batch_size` does not divide the number of input elements `N`
evenly and `drop_remainder` is `False`). If your program depends on the
batches having the same outer dimension, you should set the `drop_remainder`
argument to `True` to prevent the smaller batch from being produced.
Args:
batch_size: A `tf.int64` scalar `tf.Tensor`, representing the number of
consecutive elements of this dataset to combine in a single batch.
drop_remainder: (Optional.) A `tf.bool` scalar `tf.Tensor`, representing
whether the last batch should be dropped in the case it has fewer than
`batch_size` elements; the default behavior is not to drop the smaller
batch.
Returns:
Dataset: A `Dataset`. |
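The grouping and `drop_remainder` semantics above can be sketched over plain Python lists (an analogy only; `Dataset.batch` stacks tensors along a new outer dimension rather than building lists):

```python
def batch(items, batch_size, drop_remainder=False):
    # Group consecutive elements into chunks of length `batch_size`;
    # optionally drop a final chunk that is smaller than batch_size.
    batches = [items[i:i + batch_size]
               for i in range(0, len(items), batch_size)]
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

assert batch(list(range(8)), 3) == [[0, 1, 2], [3, 4, 5], [6, 7]]
assert batch(list(range(8)), 3, drop_remainder=True) == [[0, 1, 2], [3, 4, 5]]
```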