[TOSA] Fix tfl.transpose_conv legalization when layer has bias #61493
Conversation
* Legalization now handles cases where the layer has a bias
Hi @rsuderman, can you please review this PR? Thank you!
Looks good in general; small changes and then I can approve.
@@ -1618,6 +1619,23 @@ LogicalResult ConvertTFLTransposeConvOp::matchAndRewrite(
  bool output_is_qtype =
      output_type.getElementType().isa<mlir::quant::QuantizedType>();

  const bool has_bias = !(!tfl_conv_op.getBias() ||
Isn't this

tfl_conv_op.getBias() && !isa<...>(tfl_conv_op.getBias().getType())

? (it reads slightly easier to me than the double negative)
Yes, agreed; this is done.
  if (has_bias) {
    RankedTensorType bias_type =
        dyn_cast<RankedTensorType>(tfl_conv_op.getBias().getType());
    bool bias_is_qtype =
Prefer to use free functions here (isa<...>(X) instead of X.isa<...>); see the MLIR deprecation page.
Done; free functions are now used across the whole function for consistency.
    if (input_is_qtype != bias_is_qtype) {
      return rewriter.notifyMatchFailure(
          op,
          "input/bias tensor should "
Formatting here seems weird.
I have left this as is for now, since there are instances of this function being used with this formatting across the file. But do let me know if this needs looking at again before it can be merged!
  std::optional<Value> zero_bias;
  if (input_is_qtype) {
    uint32_t input_bits =
        dyn_cast<mlir::quant::QuantizedType>(input_type.getElementType())
Use cast: if it's a dyn_cast without a null check followed by a dereference, then cast gives a better error message and is cheaper.
Done.
            .getStorageTypeIntegralWidth();

    if (input_bits == 16 && weight_bits == 8) {
      SmallVector<APInt> vec(output_channel, APInt(48, 0, true));
Could you add a comment here on the 48 bits?
I have added a comment for this.
Thank you for the review @jpienaar, I have posted replies to your comments.
Hi @jpienaar, can you please review this PR? Thank you!
Hi @jpienaar, could you review this PR please? Thanks!
Hi @jpienaar, can you please review this PR? Thank you!
Hi @jpienaar, could you review this PR please? Thanks!
Hi @rdzhabarov, can you review this PR please? Thanks!
Hi @jpienaar, can you please review this PR? Thank you!
Hey @jpienaar, could you review this PR please? Many thanks!
Hey @jpienaar, could you review this PR please? Thanks in advance!
Hi @tom-arm, can you please resolve the conflicts? Thank you!
This PR is stale because it has been open for 14 days with no activity. It will be closed if no further activity occurs. Thank you.
* Fixed merge conflicts in legalize_tfl.cc
Hi @gbaned, the conflict is fixed now. It would be good to get this reviewed if possible.
Hi @jpienaar, can you please review this PR? Thank you!
This PR fixes the legalization of tfl.transpose_conv in the case where the layer has a non-zero bias.